OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sat Feb 20, 2016 7:47 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
onlyonemac wrote:
Something like office software could be a problem: formatting text, moving elements around on the page, and so on are all tasks which are difficult to do with an audio interface, but which, if blind people were unable to do them, would leave them very frustrated at being unable to produce documents that look as professional as those produced by a sighted person.

That must be a particularly hard thing to do for people who were born blind, but I can easily understand how people who have gone blind and who can still visualise what documents look like to sighted people will be determined to make use of that ability rather than being forced to bring in a sighted person to arrange everything for them. So it's a good example: I can see straight away how low on the priority list it would be for any application writer to provide the required functionality, so the tools needed to deal with this either need to be standard parts of the operating system or something added to the operating system which works as if it is part of it. If any part of the functionality on offer is useful to sighted users too, then it would be best for that functionality to be a standard part of the operating system rather than an optional addition. Any functionality which is not useful to sighted users could perhaps remain in a separate package which blind users would download and install into the OS, but it's only worth keeping it separate if there's a lot of functionality in that package, and I very much doubt there would be enough of it to justify not building it all directly into the OS.

Quote:
My screenreader will read out text formatting (such as the font name, size, colour, whether the text is bold/italic/underlined, and whatever else I've configured it to read out)

I would want to set it up to read in different accents to indicate the font, to change the pitch of the voice to indicate text size, and perhaps to change the mood of the voice to represent associations with different colours. That would make it less irritating when there are lots of changes, and it would be faster to process. I would also want it to be able to spell out the formatting explicitly though, and I can't see any reason why any of that would only be of use to blind people. If I'm writing something while walking in the countryside, I could get the formatting right at the time of writing instead of having to leave it till I get home.
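
To make that concrete, here's a rough sketch of the sort of mapping I have in mind (the parameter names and the scaling are made up for illustration - real speech synthesisers will differ):

Code:
/* Rough sketch (hypothetical speech-synth parameters): map text
   formatting onto voice properties so that formatting changes can be
   heard in passing rather than spelled out every time. */
struct voice_params {
    int   accent;   /* one accent per font family          */
    float pitch;    /* scaled from the text size           */
    int   mood;     /* from an association with the colour */
};

struct voice_params voice_for_format(int font, float pt_size, int colour)
{
    struct voice_params v;
    v.accent = font;              /* e.g. font #3 -> accent #3       */
    v.pitch  = pt_size / 12.0f;   /* 12pt text reads at normal pitch */
    v.mood   = colour;            /* e.g. red -> an "urgent" mood    */
    return v;
}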

Quote:
and when I'm producing a presentation in PowerPoint it reads out the size and position of the elements on the slide as I move them around (and I think there's also a key combination to read them out on demand but I never use that feature) so I can estimate where something is on the slide and get an idea of how the presentation will look.

And that's the kind of task that a sound screen would handle, giving you something close to a visual representation of the layout of the content. I am not blind, but I want to use a sound screen and that's what motivates me to think about developing one. Again it would be useful to be able to get it to state exact locations and positions of things so that they can be adjusted with precision. A lot of GUI software doesn't allow you to line things up perfectly because it's hard to get it right by eye even when you can see that it's wrong - you want actual numbers for the locations, and you want to be able to adjust those numbers to move things around.

Quote:
Sighted users aren't going to want to fuss with listening to descriptions of formatting and try to guess where the elements on the slides are, so chances are audio interface developers are going to leave those features out of the audio interface, thus putting blind users at a disadvantage.

Some sighted users will want all of that, and the best way to get it is to make sure it's made a part of the operating system. Once a need for a control is recognised, it should be added. The problem comes from operating system developers having other priorities which they want to deal with first, while someone writing software to open up the machine to blind people puts higher priority on providing that special functionality, but the better answer's for them to become part of the team of people developing the operating system and to make sure everything's done right.

Quote:
That's why, even if a separate audio interface is provided, having built-in support for something like a screenreader designed to work with the graphical interface when the audio interface is too restrictive moves the responsibility off of the application developers (who have repeatedly proven themselves to not care about software accessibility) to the screenreader developers (whose main focus is software accessibility).

Rather than having a screenreader designed to operate a GUI that's specifically designed for sighted people, you'd be better off with a universal user interface which can handle input from all devices and make the way they're used as flexible as possible. If you want the machine to tell you through words anything which it normally tells you visually, it should be possible to instruct it to do just that. In the same way, if it normally tells you something by making a noise, it should be possible to instruct it to display something visual instead. You are right that the application developers should not be left to deal with these issues, but screenreader software that's designed to operate through a GUI is always going to be following the GUI (and may be limited by it in many ways) instead of taking its own lead and acting on things directly.
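
To illustrate the idea, here's a minimal sketch (the function names and API are invented for illustration - no real OS works exactly like this): the application emits device-neutral output, and every renderer the user has enabled receives it, so words on screen and words spoken aloud are the same message taking different routes:

Code:
#include <stdio.h>

/* Minimal sketch (hypothetical API): device-neutral output routed to
   whichever renderers the user has switched on. */
typedef void (*renderer_fn)(const char *text);

static void render_visual(const char *text) { printf("[screen] %s\n", text); }
static void render_speech(const char *text) { printf("[speech] %s\n", text); }

static renderer_fn renderers[4];
static int n_renderers;

static void ui_attach(renderer_fn r) { renderers[n_renderers++] = r; }

static void ui_output(const char *text)
{
    for (int i = 0; i < n_renderers; i++)
        renderers[i](text);   /* same message, every active modality */
}

int main(void)
{
    ui_attach(render_visual);
    ui_attach(render_speech);   /* the user has switched speech output on */
    ui_output("Document saved");
    return 0;
}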

The real issue is bad user interface design, and the best way to fix it is to work directly on improving that rather than bolting on an audio converter to a primitive GUI (which is what all GUIs currently are). We need people to stop thinking of a GUI as an isolated unit, because it should just be part of a bigger user interface with many overlaps in functionality between all the different input and output devices. With a GUI, you're often forced to go through a long chain of badly thought out menus to get simple things done, and why would you want to have to go through that same chain of garbage using sound commands instead of just naming the thing you want to get to and going there directly? A proper speech user interface would take you straight to where you want to go, and if you have one of those, the graphical user interface should be integrated with it so that it too can take that direct path - both interfaces need to be designed to be part of the same universal user interface so that they can both offer the same functionality as standard wherever it is possible to do so. That is the direction to go in with this instead of keeping different user interfaces in separate compartments and then bolting disability software packages to them to provide the limited functionality of one user interface through another. I think the way forward is to unify the whole lot as one box of tricks which doesn't perpetuate unnecessary divides that restrict functionality.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 22, 2016 2:51 pm 

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
DavidCooper wrote:
The real issue is bad user interface design, and the best way to fix it is to work directly on improving that rather than bolting on an audio converter to a primitive GUI (which is what all GUIs currently are). We need people to stop thinking of a GUI as an isolated unit, because it should just be part of a bigger user interface with many overlaps in functionality between all the different input and output devices. With a GUI, you're often forced to go through a long chain of badly thought out menus to get simple things done, and why would you want to have to go through that same chain of garbage using sound commands instead of just naming the thing you want to get to and going there directly? A proper speech user interface would take you straight to where you want to go, and if you have one of those, the graphical user interface should be integrated with it so that it too can take that direct path - both interfaces need to be designed to be part of the same universal user interface so that they can both offer the same functionality as standard wherever it is possible to do so. That is the direction to go in with this instead of keeping different user interfaces in separate compartments and then bolting disability software packages to them to provide the limited functionality of one user interface through another. I think the way forward is to unify the whole lot as one box of tricks which doesn't perpetuate unnecessary divides that restrict functionality.
I agree with this notion, although I don't believe that Brendan's approach to implementing something of this kind is likely to be effective. For that matter, I have considered such interface design concepts myself although I believe that it would be very challenging to implement this in a way that actually works for all users.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 22, 2016 3:35 pm 

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
onlyonemac wrote:
Do you think a mouse requires a radically different user interface to a handwriting recognition system? Yes, it does, but you're trying to group those all into the same set of events, and trying to map all input devices to all of the events even if those devices can't reasonably be mapped to those events.


It does? Why? It sounds like a massive lie to me.

Choose any type of application you like and any output device you like. Then choose any 2 combinations of one or more input devices (e.g. speech alone vs. keyboard+mouse). Finally; describe in detail how that front end (for your one chosen application and your one chosen output device) must be different depending on which of 2 and only 2 combinations of input devices are being used. I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself). Choose the application and devices wisely. Assume that you have one and only one chance to prove beyond any doubt that you are not a liar.
Output device:
  • Monitor
Input devices:
  • Keyboard and mouse
  • Handwriting recognition
Application:
  • Word processor
For keyboard and mouse input:

Frontend will take text input from the keyboard and add it to the document. Frontend will also implement keyboard shortcuts (let's ignore the details of whether these are handled in the OS, the GUI toolkit, or the application itself) to allow the user to quickly access common commands (change formatting, insert a picture, save the document, etc.). Frontend will allow the user to move the insertion point by clicking the mouse at the desired location, and to select text by dragging through the text. Frontend will display menus and/or toolbars with all available commands (change formatting, insert a picture, save the document, etc.) which may be selected with the mouse or navigated with the keyboard (alt-f to open the file menu, for example).

For handwriting recognition input:

Frontend will take text input from the handwriting recognition system (implemented in either software or hardware) and add it to the document. Frontend will also implement one or more gestures to enter a "command" mode, whereafter the user can write the name of a command (with the available commands probably corresponding to those that appear in the menus of the keyboard and mouse frontend) to have it performed. Frontend may implement a degree of "intelligence" to determine the intended command from a slightly incorrect name (e.g. if the user writes "add picture" instead of the correct command "insert image", the "insert image" command will nevertheless be performed). Frontend will use a similar system of gestures and/or commands to allow the user to move the insertion point and to select text.

The differences:

The first frontend must display menus and/or toolbars on the screen, whereas the second frontend can use the entire screen for the document. The first frontend lists the available commands in a menu and receives a "click" event on the command selected by the user, whereas the second frontend receives the name of the command entered by the user directly (which furthermore may not be a valid command). The first frontend accepts input in multiple "modes" (i.e. "edit" mode and "command" mode) simultaneously (in the sense that the user can type on the keyboard and select a menu option with the mouse without having to explicitly change mode), whereas the second frontend can accept input in only one mode at a time, because the same process of writing text on the handwriting recognition device is used both to enter text into the document and to enter commands.

Why the frontends need to be different:

To navigate the first frontend with the handwriting recognition system would require the use of gestures to navigate the menus in the same way as they are navigated with the keyboard (especially if we are going to abstract the difference between the keyboard and the handwriting recognition system). So, for example, to save the document the user would have to draw one gesture to represent the "alt" key, then write the letter "f", then write the letter "s" to jump to the "save" option. Or they might have to draw gestures to navigate up and down if there isn't a shortcut key for the desired option (much as they would navigate around the menus with the arrow keys on a keyboard). Or they would have to draw one gesture to represent the "ctrl" key, then write the letter "s" to input the keyboard shortcut for the "save" option. In short, trying to abstract input devices by e.g. treating a handwriting recognition system the same as a keyboard (i.e. something that sends "keypress" events or "navigate up"/"navigate down" events) would in a case like this make the interface more difficult to use for the handwriting recognition user than it needs to be.

Alternatively, you might try to make the keyboard and mouse frontend the same as the handwriting recognition frontend. If we ignore the mouse for a moment, this leaves us with keyboard input taking the place of the handwriting recognition system. So now for the user to navigate the interface with the keyboard, they would be required to first press a particular key combination to enter "command" mode (to represent the enter-command-mode gestures on the handwriting recognition system) and then type the name of the command that they want, instead of being able to see a nice list of all the commands and navigate straight to them by pressing a single letter or a few arrow keys.

Note that most of these differences may be abstracted by the OS and/or the device drivers, but in such a setup the application would have to say "these are my available commands and these are the input modes that I use" to the OS and then the OS determines the best way to present the interface to the user (e.g. if there's a mouse present it can display a set of menus, if there's a keyboard present it can allow the use of keyboard shortcuts, if there's a handwriting recognition system present it can allow the user to write the name of the command in a "command" mode). While this is a perfectly plausible way of structuring the interface subsystem of the OS, it restricts the flexibility for applications to design their own interfaces beyond choosing what commands to present to the user; furthermore, I believe, you are structuring your events along the lines of "navigate this way" or "enter this letter" rather than "select this command", which, as I have explained in the previous two paragraphs, is going to make one or more interfaces ("frontends") less efficient than they should be.
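
To make the contrast concrete, here's a rough sketch of the two event shapes (the type and field names are made up for illustration - this isn't any real OS's API):

Code:
/* Events a keyboard/mouse frontend receives: low-level device events
   that the frontend itself must map onto its commands. */
enum km_event_type { EV_KEY_PRESS, EV_MOUSE_CLICK };

struct km_event {
    enum km_event_type type;
    union {
        struct { int keycode; int modifiers; } key;   /* e.g. alt+F    */
        struct { int x, y, button; } click;           /* menu hit-test */
    } u;
};

/* Events a handwriting frontend receives: the recogniser has already
   turned strokes into text, so the frontend gets whole strings plus a
   mode flag set by the enter-command-mode gesture. */
enum hw_mode { MODE_EDIT, MODE_COMMAND };

struct hw_event {
    enum hw_mode mode;
    char text[64];   /* e.g. "insert image" in command mode */
};

Forcing either frontend to consume the other's event shape is where the awkwardness described above comes from.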

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 22, 2016 9:02 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
Choose any type of application you like and any output device you like. Then choose any 2 combinations of one or more input devices (e.g. speech alone vs. keyboard+mouse). Finally; describe in detail how that front end (for your one chosen application and your one chosen output device) must be different depending on which of 2 and only 2 combinations of input devices are being used. I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself). Choose the application and devices wisely. Assume that you have one and only one chance to prove beyond any doubt that you are not a liar.
Output device:
  • Monitor
Input devices:
  • Keyboard and mouse
  • Handwriting recognition
Application:
  • Word processor
For keyboard and mouse input:

Frontend will take text input from the keyboard and add it to the document. Frontend will also implement keyboard shortcuts (let's ignore the details of whether these are handled in the OS, the GUI toolkit, or the application itself) to allow the user to quickly access common commands (change formatting, insert a picture, save the document, etc.). Frontend will allow the user to move the insertion point by clicking the mouse at the desired location, and to select text by dragging through the text. Frontend will display menus and/or toolbars with all available commands (change formatting, insert a picture, save the document, etc.) which may be selected with the mouse or navigated with the keyboard (alt-f to open the file menu, for example).

For handwriting recognition input:

Frontend will take text input from the handwriting recognition system (implemented in either software or hardware) and add it to the document. Frontend will also implement one or more gestures to enter a "command" mode, whereafter the user can write the name of a command (with the available commands probably corresponding to those that appear in the menus of the keyboard and mouse frontend) to have it performed. Frontend may implement a degree of "intelligence" to determine the intended command from a slightly incorrect name (e.g. if the user writes "add picture" instead of the correct command "insert image", the "insert image" command will nevertheless be performed). Frontend will use a similar system of gestures and/or commands to allow the user to move the insertion point and to select text.

The differences:

The first frontend must display menus and/or toolbars on the screen, whereas the second frontend can use the entire screen for the document. The first frontend lists the available commands in a menu and receives a "click" event on the command selected by the user, whereas the second frontend receives the name of the command entered by the user directly (which furthermore may not be a valid command). The first frontend accepts input in multiple "modes" (i.e. "edit" mode and "command" mode) simultaneously (in the sense that the user can type on the keyboard and select a menu option with the mouse without having to explicitly change mode), whereas the second frontend can accept input in only one mode at a time, because the same process of writing text on the handwriting recognition device is used both to enter text into the document and to enter commands.

Why the frontends need to be different:

To navigate the first frontend with the handwriting recognition system would require the use of gestures to navigate the menus in the same way as they are navigated with the keyboard (especially if we are going to abstract the difference between the keyboard and the handwriting recognition system). So, for example, to save the document the user would have to draw one gesture to represent the "alt" key, then write the letter "f", then write the letter "s" to jump to the "save" option. Or they might have to draw gestures to navigate up and down if there isn't a shortcut key for the desired option (much as they would navigate around the menus with the arrow keys on a keyboard). Or they would have to draw one gesture to represent the "ctrl" key, then write the letter "s" to input the keyboard shortcut for the "save" option. In short, trying to abstract input devices by e.g. treating a handwriting recognition system the same as a keyboard (i.e. something that sends "keypress" events or "navigate up"/"navigate down" events) would in a case like this make the interface more difficult to use for the handwriting recognition user than it needs to be.

Alternatively, you might try to make the keyboard and mouse frontend the same as the handwriting recognition frontend. If we ignore the mouse for a moment, this leaves us with keyboard input taking the place of the handwriting recognition system. So now for the user to navigate the interface with the keyboard, they would be required to first press a particular key combination to enter "command" mode (to represent the enter-command-mode gestures on the handwriting recognition system) and then type the name of the command that they want, instead of being able to see a nice list of all the commands and navigate straight to them by pressing a single letter or a few arrow keys.

Note that most of these differences may be abstracted by the OS and/or the device drivers, but in such a setup the application would have to say "these are my available commands and these are the input modes that I use" to the OS and then the OS determines the best way to present the interface to the user (e.g. if there's a mouse present it can display a set of menus, if there's a keyboard present it can allow the use of keyboard shortcuts, if there's a handwriting recognition system present it can allow the user to write the name of the command in a "command" mode). While this is a perfectly plausible way of structuring the interface subsystem of the OS, it restricts the flexibility for applications to design their own interfaces beyond choosing what commands to present to the user; furthermore, I believe, you are structuring your events along the lines of "navigate this way" or "enter this letter" rather than "select this command", which, as I have explained in the previous two paragraphs, is going to make one or more interfaces ("frontends") less efficient than they should be.


First, the hand-writing system is typically a monochrome LCD touch screen designed for use with a stylus; which means it'd be trivial to have a "touch pad mode" that can be used for navigating menus, etc. It would also be trivial to add (e.g.) buttons and/or scroll bars around the edges of the hand-writing system that can be used without a special mode (including buttons for up/down/left/right events).

For your hand-writing system there's no way for the user to discover which commands do what (which is hideously broken), and a menu system is required for that purpose. The menu system (that is required for both keyboard and handwriting, even if it's just so the user can discover commands and/or keyboard shortcuts) would do what most/all menu systems do and have things like "_O_pen File", "_S_ave", etc; where the underscore tells the user the keyboard shortcut or command. Essentially, you see "_O_pen File" in the menu and know that "alt + O" is the keyboard shortcut to open a file, and know that writing "O" in the handwriting system is the command to open a file. When a user presses "alt + Z" the keyboard driver sends the "z command" without knowing/caring if the front end supports the command. When the user draws "Z" the handwriting system sends the "z command" without knowing/caring if the front end supports the command.

Note that the menu system is part of a widget service (and implemented in a separate process). When the menu system is started it loads a file describing the menus for the current locale (for internationalisation) - e.g. for an English user it might load the file "yourapp/menu.en" and for a Russian user it might load the file "yourapp/menu.ru". The front end just forwards "user commands" (whether they're key presses or drawn by handwriting) to the menu system, and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system. The menu system sends back whatever the file (for the user's locale) says to send back. For English, for the "S command" (from the "alt+S" keyboard shortcut or from drawing "S") the file might say to send back a "save file message", and clicking on "Save File" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the exact same "save file message". For Russian, for the "C" command the file might say to send back a "save file message", and clicking on "сохранить файл" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the same "save file message". The front end just receives the "save file message" that was defined/created by the front-end's own developer (e.g. using an "enum"). The front end doesn't care what the input devices are, and also doesn't care what the current locale is (and doesn't care that the Russian word for "save" is "сохранить" where it makes no sense at all to use "S" for Russian).
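
As a rough sketch (the message names and the file layout here are invented to illustrate the plumbing, not a finished format):

Code:
/* The front end defines its own messages (the "enum" mentioned above);
   the menu service maps user commands and menu clicks onto these and
   sends them back, so the front end never sees the input device. */
enum frontend_msg { MSG_OPEN_FILE = 1, MSG_SAVE_FILE = 2 };

/* yourapp/menu.en - command letter, menu label, message to send back:
     O   "Open File"        1
     S   "Save File"        2

   yourapp/menu.ru - different letters and labels, same messages:
     О   "открыть файл"     1
     С   "сохранить файл"   2 */

Whether the "S command" came from "alt+S", from drawing "S", or from a click on the menu entry, the front end only ever receives MSG_SAVE_FILE.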

Of course internationalisation for everything else uses a similar scheme based on one file per locale/language. You don't print "Hello world", you ask a service to find "string #1 for the user's locale/language" and print that. This means that front-ends can be translated to different languages just by editing files; which is important because very few programmers know every language. It also means that (with a suitable search order) the end user can override these files. E.g. I could create a "home/bcos/yourapp/menu.en" file (and a "home/bcos/yourapp/strings.en" file) that would be used instead of the files that came with the front end, and I could completely change the menu system to suit myself; and if I want to change "Open File" to "Load File" then that's perfectly fine - pressing "alt+L" on the keyboard or drawing "L" on the handwriting system will cause the same "open file message" to be sent from the menu system to the front-end.
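
A sketch of what that lookup might look like (the file layout - one "id<TAB>text" entry per line - and the function name are assumptions for illustration; the search order is the point):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical search order: the user's own files are tried before the
   files that shipped with the front end, so the user can override them. */
static const char *search_order[] = {
    "home/bcos/yourapp/strings.%s",   /* user override        */
    "yourapp/strings.%s",             /* shipped with the app */
};

/* Look up "string #id for the user's locale"; returns a malloc'd copy
   of the text, or NULL if no file defines it. */
char *load_string(int id, const char *locale)
{
    char path[256], line[512], text[400];
    for (size_t i = 0; i < sizeof search_order / sizeof *search_order; i++) {
        snprintf(path, sizeof path, search_order[i], locale);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;   /* try the next place in the search order */
        while (fgets(line, sizeof line, f)) {
            int n;
            if (sscanf(line, "%d\t%399[^\n]", &n, text) == 2 && n == id) {
                fclose(f);
                return strdup(text);
            }
        }
        fclose(f);
    }
    return NULL;
}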

Brendan wrote:
I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself).


I don't think you've found a case that is irrefutable. If anything, I'd be tempted to say you've done bizarre things (like not having menus for the handwriting system); in addition to relying on assumptions (like "the application would have to say "these are my available commands and these are the input modes that I use" to the OS") that aren't necessary (and can't work for other reasons - like proper internationalisation); specifically to create problems that should not have existed.

Try again.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 4:21 am 

Joined: Wed Oct 18, 2006 3:45 am
Posts: 9301
Location: On the balcony, where I can actually keep 1½m distance
Brendan wrote:
onlyonemac wrote:
Do you think a mouse requires a radically different user interface to a handwriting recognition system? Yes, it does, but you're trying to group those all into the same set of events, and trying to map all input devices to all of the events even if those devices can't reasonably be mapped to those events.


It does? Why? It sounds like a massive lie to me.
Because you direct a pen with the precision of your thumb and index finger. You direct a mouse with the arm and wrist. You can't even use the index finger to steer a mouse because there's a button there.

You can't expect to match mouse-handwriting with pen/stylus handwriting.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 5:03 am 

Joined: Mon Jan 03, 2011 6:58 pm
Posts: 283
Combuster wrote:
Brendan wrote:
onlyonemac wrote:
Do you think a mouse requires a radically different user interface to a handwriting recognition system? Yes, it does, but you're trying to group those all into the same set of events, and trying to map all input devices to all of the events even if those devices can't reasonably be mapped to those events.


It does? Why? It sounds like a massive lie to me.
Because you direct a pen with the precision of your thumb and index finger. You direct a mouse with the arm and wrist. You can't even use the index finger to steer a mouse because there's a button there.

You can't expect to match mouse-handwriting with pen/stylus handwriting.


I think the disconnect here is in saying that a handwriting recognition system is like a mouse. Realistically (in a system like Brendan is suggesting) it would be like half a mouse + half a keyboard. If you "write" an 'a', a keyboard-like event is fired; if you click a visible menu item, a mouse-like click event is fired.

I see no problem with a system done like that (outside of the obvious time cost associated with doing it well, a cost that Brendan has repeatedly said he is willing to pay).
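
In sketch form (the event and type names are made up; the point is just the split), something like:

Code:
#include <stdio.h>

/* Hypothetical unified events: a handwriting device acting as "half a
   mouse + half a keyboard" - recognised letters become key-like events,
   taps on visible widgets become click-like events. */
enum ev_type { EV_KEY, EV_CLICK };

struct event {
    enum ev_type type;
    union {
        char key;                  /* from a recognised "a"     */
        struct { int x, y; } at;   /* from a tap on a menu item */
    } u;
};

static void dispatch(struct event e)
{
    if (e.type == EV_KEY)
        printf("key-like event: '%c'\n", e.u.key);
    else
        printf("click-like event at (%d,%d)\n", e.u.at.x, e.u.at.y);
}

int main(void)
{
    struct event a = { EV_KEY,   { .key = 'a' } };
    struct event b = { EV_CLICK, { .at = { 40, 12 } } };
    dispatch(a);
    dispatch(b);
    return 0;
}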

- Monk


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 7:16 am 

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
For your hand-writing system there's no way for the user to discover which commands do what (which is hideously broken), and a menu system is required for that purpose. The menu system (that is required for both keyboard and handwriting, even if it's just so the user can discover commands and/or keyboard shortcuts) would do what most/all menu systems do and have things like "_O_pen File", "_S_ave", etc; where the underscore tells the user the keyboard shortcut or command. Essentially, you see "_O_pen File" in the menu and know that "alt + O" is the keyboard shortcut to open a file, and know that writing "O" in the handwriting system is the command to open a file. When a user presses "alt + Z" the keyboard driver sends the "z command" without knowing/caring if the front end supports the command. When the user draws "Z" the handwriting system sends the "z command" without knowing/caring if the front end supports the command.
I was going to provide a "help" command which lists the available commands, or a command like "file" which would list the commands that would be on the file menu in a keyboard/mouse-driven interface. With your system, how will the system distinguish between the user writing a "Z" to simulate "alt-z" and the user writing a "Z" to enter the letter "Z" into their document?
Brendan wrote:
Note that the menu system is part of a widget service (and implemented in a separate process). When the menu system is started it loads a file describing the menus for the current locale (for internationalisation) - e.g. for an English user it might load the file "yourapp/menu.en" and for a Russian user it might load the file "yourapp/menu.ru". The front end just forwards "user commands" (whether they're key presses or drawn by handwriting) to the menu system, and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system. The menu system sends back whatever the file (for the user's locale) says to send back. For English, for the "S command" (from the "alt+S" keyboard shortcut or from drawing "S") the file might say to send back a "save file message", and clicking on "Save File" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the exact same "save file message". For Russian, for the "C" command the file might say to send back a "save file message", and clicking on "сохранить файл" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the same "save file message". The front end just receives the "save file message" that was defined/created by the front-end's own developer (e.g. using an "enum"). The front end doesn't care what the input devices are, and also doesn't care what the current locale is (and doesn't care that the Russian word for "save" is "сохранить" where it makes no sense at all to use "S" for Russian).
I considered such a system myself, but not every application uses menus and those that do often use more than menus, so how are you going to adapt the rest of the interface in a similar manner, or are you going to restrict application developers to having only menus for their interface?
Brendan wrote:
and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system.
Why are we navigating menus with a handwriting system running in a freaking touchpad mode??? How do you think that's efficient?
Brendan wrote:
Of course internationalisation for everything else uses a similar scheme based on one file per locale/language. You don't print "Hello world", you ask a service to find "string #1 for the user's locale/language" and print that. This means that front-ends can be translated to different languages just by editing files; which is important because very few programmers know every language.
We have this already - take a look at Android's "strings.xml" system.
Brendan wrote:
It also means that (with a suitable search order) the end user can override these files. E.g. I could create a "home/bcos/yourapp/menu.en" file (and a "home/bcos/yourapp/strings.en" file) that would be used instead of the files that came with the front end, and I could completely change the menu system to suit myself; and if I want to change "Open File" to "Load File" then that's perfectly fine - pressing "alt+L" on the keyboard or drawing "L" on the handwriting system will cause the same "open file message" to be sent from the menu system to the front-end.
GTK under Linux is supposed to implement a similar customisable menu system, although for some reason very few applications use it. But again, how do you propose extending the interface beyond a menubar?
Brendan wrote:
in addition to relying on assumptions (like "the application would have to say "these are my available commands and these are the input modes that I use" to the OS") that aren't necessary (and can't work for other reasons - like proper internationalisation); specifically to create problems that should not have existed.
I am fascinated at your inability to understand English: having explained how your interface subsystem is mostly built on the principle of "here are my available commands" (in your case, given in a file), you now dismiss my "assumption" of the application saying "here are my available commands" as unnecessary and as creating problems? After explaining how you would support i18n, you now accuse my "assumption" (which, as aforementioned, is basically what you spent the last two paragraphs explaining) of inhibiting proper i18n? So this is how it works: when I make a suggestion, you accuse it of being broken in some way without properly considering how it would work, but when you make the same suggestion, you find solutions to all the problems that you've accused me of ignoring?

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 8:26 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
For your hand-writing system there's no way for the user to discover which commands do what (which is hideously broken), and a menu system is required for that purpose. The menu system (that is required for both keyboard and handwriting, even if it's just so the user can discover commands and/or keyboard shortcuts) would do what most/all menu systems do and have things like "_O_pen File", "_S_ave", etc; where the underscore tells the user the keyboard shortcut or command. Essentially, you see "_O_pen File" in the menu and know that "alt + O" is the keyboard shortcut to open a file, and know that writing "O" in the handwriting system is the command to open a file. When a user presses "alt + Z" the keyboard driver sends the "z command" without knowing/caring if the front end supports the command. When the user draws "Z" the handwriting system sends the "z command" without knowing/caring if the front end supports the command.
I was going to provide a "help" command which lists the available commands, or a command like "file" which would list the commands that would be on the file menu in a keyboard/mouse-driven interface. With your system, how will the system distinguish between the user writing a "Z" to simulate "alt-z" and the user writing a "Z" to enter the letter "Z" into their document?


Handwriting recognition has inherent race conditions. For a simple example, you can start writing "took" and (if nothing is done to prevent it) it'll see "l" then "lo" then "loo" then "look" then "took" (because people dot the i's and cross the t's last). It's also extremely hard to recognise handwriting (e.g. often I'll scribble a quick note and not be able to read my own writing; and because of this I tend to write in capital letters for anything more important than a quick note). To cope with this you need some complex logic relying on a dictionary - it's a lot of processing time, and not something you want to be doing constantly while the user is writing. You need to wait until after the user has written before you start trying to recognise. This includes waiting until after the user has written "O" before trying to recognise it.
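
In driver terms it's something like this rough sketch (the names and the pause length are assumptions; the point is only that recognition runs after a pause, not per-stroke):

Code:
/* Sketch (hypothetical driver logic): never recognise mid-word - wait
   for a pause after the last stroke, so that "took" isn't committed as
   "l", "lo", "loo" or "look" along the way. */
#define COMMIT_PAUSE_MS 600   /* assumed tuning value */

void recognise_strokes(void);   /* hypothetical recogniser entry point */

static unsigned long last_stroke_ms;
static int have_strokes;

void on_stroke_end(unsigned long now_ms)
{
    last_stroke_ms = now_ms;   /* the user is still writing */
    have_strokes = 1;
}

void on_timer_tick(unsigned long now_ms)
{
    if (have_strokes && now_ms - last_stroke_ms >= COMMIT_PAUSE_MS) {
        recognise_strokes();   /* only now run the expensive pass */
        have_strokes = 0;
    }
}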

onlyonemac wrote:
Brendan wrote:
Note that the menu system is part of a widget service (and implemented in a separate process). When the menu system is started it loads a file describing the menus for the current locale (for internationalisation) - e.g. for an English user it might load the file "yourapp/menu.en" and for a Russian user it might load the file "yourapp/menu.ru". The front end just forwards "user commands" (whether they're key presses or drawn by handwriting) to the menu system, and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system. The menu system sends back whatever the file (for the user's locale) says to send back. For English, for the "S command" (from the "alt+S" keyboard shortcut or from drawing "S") the file might say to send back a "save file message", and clicking on "Save File" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the exact same "save file message". For Russian, for the "C" command the file might say to send back a "save file message", and clicking on "сохранить файл" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the same "save file message". The front end just receives the "save file message" that was defined/created by the front-end's own developer (e.g. using an "enum"). The front end doesn't care what the input devices are, and also doesn't care what the current locale is (and doesn't care that the Russian word for "save" is "сохранить" where it makes no sense at all to use "S" for Russian).
I considered such a system myself, but not every application uses menus and those that do often use more than menus, so how are you going to adapt the rest of the interface in a similar manner, or are you going to restrict application developers to having only menus for their interface?


I'm currently waiting for your next attempt at showing anything where the front-end needs to be different for different input devices. Rather than useless vague hand-waving like "not every application uses menus and those that do often use more than menus", describe an application that uses more than menus.

onlyonemac wrote:
Brendan wrote:
and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system.
Why are we navigating menus with a handwriting system running in a freaking touchpad mode??? How do you think that's efficient?


Why are you assuming this is the method that the user chose to use (instead of choosing to use the "up/down/left/right" buttons or choosing to use commands or choosing anything else)? Why are you assuming the user chose wrong?

Note that for exploring the menu system (discoverability) the touchpad mode would be extremely efficient - the user would essentially draw a vertical line down the menu and watch all the sub-menus pop out as they go past, and see everything in seconds.

onlyonemac wrote:
Brendan wrote:
Of course internationalisation for everything else uses a similar scheme based on one file per locale/language. You don't print "Hello world", you ask a service to find "string #1 for the user's locale/language" and print that. This means that front-ends can be translated to different languages just by editing files; which is important because very few programmers know every language.
We have this already - take a look at Android's "strings.xml" system.


Virtually all applications (on all OSs) use this system or something like it. I'm not saying it's new. I'm only pointing out that your previous description of how a word processor might work was completely broken because it fails to handle internationalisation.

onlyonemac wrote:
Brendan wrote:
It also means that (with a suitable search order) the end user can override these files. E.g. I could create a "home/bcos/yourapp/menu.en" file (and a "home/bcos/yourapp/strings.en" file) that would be used instead of the files that came with the front end, and I could completely change the menu system to suit myself; and if I want to change "Open File" to "Load File" then that's perfectly fine - pressing "alt+L" on the keyboard or drawing "L" on the handwriting system will cause the same "open file message" to be sent from the menu system to the front-end.
GTK under Linux is supposed to implement a similar customisable menu system, although for some reason very few applications use it. But again, how do you propose extending the interface beyond a menubar?


I'm currently waiting for your next attempt at showing anything where the front-end needs to be different for different input devices. Describe an application that needs "an interface beyond a menubar" if you like.

onlyonemac wrote:
Brendan wrote:
in addition to relying on assumptions (like "the application would have to say "these are my available commands and these are the input modes that I use" to the OS") that aren't necessary (and can't work for other reasons - like proper internationalisation); specifically to create problems that should not have existed.
I am fascinated at your inability to understand English: having explained how your interface subsystem is mostly built on the principle of "here are my available commands" (in your case, given in a file), you now accuse my "assumption" of the application saying "here are my available commands" as unnecessary and creating problems?


Your description relied on the input device driver knowing what the application's commands were (which is extremely flawed for many reasons). This is very different to a menu/widget service (but not the input device drivers) knowing the application's commands. The menu/widget service is not an input device driver.

onlyonemac wrote:
After explaining how you would support i18n, you now accuse my "assumption" (which, as aforementioned, is basically what you spent the last two paragraphs explaining) of inhibiting proper i18n? So that's this: when I make a suggestion, you accuse it of being broken in some way without properly considering how it would work, but when you make the same suggestion, you find solutions to all the problems that you've accused me of ignoring?


You're attempting to rewrite history. Go back and read your previous reply; especially the part that said (underlining mine):

onlyonemac wrote:
Note that most of these differences may be abstracted by the OS and/or the device drivers, but in such a setup the application would have to say "these are my available commands and these are the input modes that I use" to the OS and then the OS determines the best way to present the interface to the user (e.g. if there's a mouse present it can display a set of menus, if there's a keyboard present it can allow the use of keyboard shortcuts, if there's a handwriting recognition system present it can allow the user to write the name of the command in a "command" mode).


Essentially; I was only saying that the application wouldn't tell the OS which commands are available (the input device drivers would send any command, available or not, without being told which commands the application accepts).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 12:29 pm 

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
Handwriting recognition has inherent race conditions. For a simple example, you can start writing "took" and (if nothing is done to prevent it) it'll see "l" then "lo" then "loo" then "look" then "took" (because people dot the i's and cross the t's last). It's also extremely hard to recognise handwriting (e.g. often I'll scribble a quick note and not be able to read my own writing; and because of this I tend to write in capital letters for anything more important than a quick note). To cope with this you need some complex logic relying on a dictionary - it's a lot of processing time, and not something you want to be doing constantly while the user is writing. You need to wait until after the user has written before you start trying to recognise. This includes waiting until after the user has written "O" before trying to recognise it.
So what if I want to insert a single letter "O" into my document? You still haven't explained that part (which was the original question, by the way).
Brendan wrote:
onlyonemac wrote:
Brendan wrote:
and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system.
Why are we navigating menus with a handwriting system running in a freaking touchpad mode??? How do you think that's efficient?


Why are you assuming this is the method that the user chose to use (instead of choosing to use the "up/down/left/right" buttons or choosing to use commands or choosing anything else)? Why are you assuming the user chose wrong?

Note that for exploring the menu system (discoverability) the touchpad mode would be extremely efficient - the user would essentially draw a vertical line down the menu and watch all the sub-menus pop out as they go past, and see everything in seconds.
Fine - then implement that as well. But from what you're saying, it sounds (excuse me if I'm wrong) like that's the only menu navigation style that you're implementing for the handwriting recognition device.
Brendan wrote:
I'm only pointing out that your previous description of how a word processor might work was completely broken because it fails to handle internationalisation.
Why can't the application just provide the commands in the form of "command number 23 listed in file /applications/my-word-processor/commands.<two-letter-language-code>"? I never said that the application gives the commands to the OS in a locale-specific way.
Brendan wrote:
Your description relied on the input device driver knowing what the application's commands were (which is extremely flawed for many reasons). This is very different to a menu/widget service (but not the input device drivers) knowing the application's commands. The menu/widget service is not an input device driver.
Brendan wrote:
Essentially; I was only saying that the application wouldn't tell the OS which commands are available (the input device drivers would send any command, available or not, without being told which commands the application accepts).
Whatever; that's an implementation detail that doesn't really matter for the purpose of this discussion.
Brendan wrote:
Rather than useless vague hand-waving like "not every application uses menus and those that do often use more than menus", describe an application that uses more than menus.
Brendan wrote:
I'm currently waiting for your next attempt at showing anything where the front-end needs to be different for different input devices. Describe an application that needs "an interface beyond a menubar" if you like.
Alright, what about this one: a graphics editor like Photoshop or GIMP. This needs:
  • a toolbox (otherwise the application is very inefficient to use)
  • colour spinners/sliders (otherwise choosing colours would be difficult)
  • a drawing canvas
Or an audio/video editor like Audacity, Adobe Premiere or Cinelerra. This needs:
  • a timeline to display the position of the audio/video tracks and move them around
  • a transport bar (to start, stop, and rewind the playback, otherwise the application is very inefficient to use)
  • a preview pane (in the case of a video editor)
Even something as common as a graphical file manager these days features sidebars, trees, shortcut bars, and other such interface elements which, while not strictly necessary (all of the functionality is - or should be - available through the menubar as well), are important to have unless you're willing to make your (keyboard/mouse) users jump through hoops to perform simple tasks (just so that your touchscreen/microphone/switch-access users can use the same menu interface even if that's not the most efficient interface for their input devices).

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 12:52 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
onlyonemac wrote:
Brendan wrote:
Rather than useless vague hand-waving like "not every application uses menus and those that do often use more than menus", describe an application that uses more than menus.
Brendan wrote:
I'm currently waiting for your next attempt at showing anything where the front-end needs to be different for different input devices. Describe an application that needs "an interface beyond a menubar" if you like.
Alright, what about this one: a graphics editor like Photoshop or GIMP. This needs:
  • a toolbox (otherwise the application is very inefficient to use)
  • colour spinners/sliders (otherwise choosing colours would be difficult)
  • a drawing canvas
Or an audio/video editor like Audacity, Adobe Premiere or Cinelerra. This needs:
  • a timeline to display the position of the audio/video tracks and move them around
  • a transport bar (to start, stop, and rewind the playback, otherwise the application is very inefficient to use)
  • a preview pane (in the case of a video editor)
Even something as common as a graphical file manager these days features sidebars, trees, shortcut bars, and other such interface elements which, while not strictly necessary (all of the functionality is - or should be - available through the menubar as well), are important to have unless you're willing to make your (keyboard/mouse) users jump through hoops to perform simple tasks (just so that your touchscreen/microphone/switch-access users can use the same menu interface even if that's not the most efficient interface for their input devices).
An even crazier example is a DAW - beyond just a canvas or a timeline, now we have an absurd number of types of highly specialized inputs that work with mouse, keyboard, touch screen, and multiple types of one-off and often vendor-specific hardware.

_________________
[www.abubalay.com]


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Tue Feb 23, 2016 8:40 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
Handwriting recognition has inherent race conditions. For a simple example, you can start writing "took" and (if nothing is done to prevent it) it'll see "l" then "lo" then "loo" then "look" then "took" (because people dot the i's and cross the t's last). It's also extremely hard to recognise handwriting (e.g. often I'll scribble a quick note and not be able to read my own writing; and because of this I tend to write in capital letters for anything more important than a quick note). To cope with this you need some complex logic relying on a dictionary - it's a lot of processing time, and not something you want to be doing constantly while the user is writing. You need to wait until after the user has written before you start trying to recognise. This includes waiting until after the user has written "O" before trying to recognise it.
So what if I want to insert a single letter "O" into my document? You still haven't explained that part (which was the original question, by the way).


How is it not obvious? If you want "O" you write "O", and if you want "O" (with an underline to indicate it's a command) you write "O" (with an underline to indicate it's a command), and (when the handwriting recognition system knows you're finished writing) the handwriting recognition system recognises your handwriting.

onlyonemac wrote:
Brendan wrote:
onlyonemac wrote:
Why are we navigating menus with a handwriting system running in a freaking touchpad mode??? How do you think that's efficient?


Why are you assuming this is the method that the user chose to use (instead of choosing to use the "up/down/left/right" buttons or choosing to use commands or choosing anything else)? Why are you assuming the user chose wrong?

Note that for exploring the menu system (discoverability) the touchpad mode would be extremely efficient - the user would essentially draw a vertical line down the menu and watch all the sub-menus pop out as they go past, and see everything in seconds.
Fine - then implement that as well. But from what you're saying, it sounds (excuse me if I'm wrong) like that's the only menu navigation style that you're implementing for the handwriting recognition device.


I mentioned both "touch pad mode" and "buttons and/or scroll bars around the edges of the hand-writing system that can be used without a special mode (including buttons for up/down/left/right events)" in the very first paragraph of my original reply.

onlyonemac wrote:
Brendan wrote:
I'm only pointing out that your previous description of how a word processor might work was completely broken because it fails to handle internationalisation.
Why can't the application just provide the commands in the form of "command number 23 listed in file /applications/my-word-processor/commands.<two-letter-language-code>"? I never said that the application gives the commands to the OS in a locale-specific way.


In theory I guess you could have a third file for internationalisation, plus some way for input device drivers to know which application is currently active that doesn't suffer from race conditions. It isn't what you originally described, and I'd argue it's pointlessly complicated and error prone (as the file given to input device drivers would need to match the file given to the widget service).
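
For what it's worth, a sketch of what that scheme might look like - stable command numbers with per-locale names, so the driver only ever reports a number. The "number=name" file format here is a made-up assumption:

Code:
/* Hypothetical per-locale command file, e.g.
   /applications/my-word-processor/commands.en containing lines like:
       23=Open File
       24=Save File
   The input driver matches recognised text against the names and reports
   only the locale-independent number; the widget service does the reverse. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CMDS 256

typedef struct { int number; char name[64]; } Command;

static Command cmds[MAX_CMDS];
static int ncmds = 0;

static void load_commands(const char *path)
{
    FILE *f = fopen(path, "r");
    char line[128];
    if (!f) return;
    while (ncmds < MAX_CMDS && fgets(line, sizeof line, f)) {
        char *eq = strchr(line, '=');
        if (!eq) continue;            /* skip malformed lines */
        *eq = '\0';
        cmds[ncmds].number = atoi(line);
        strncpy(cmds[ncmds].name, eq + 1, sizeof cmds[ncmds].name - 1);
        cmds[ncmds].name[strcspn(cmds[ncmds].name, "\n")] = '\0';
        ncmds++;
    }
    fclose(f);
}

/* Driver side: recognised handwriting/speech -> command number, or -1. */
static int command_number(const char *recognised)
{
    for (int i = 0; i < ncmds; i++)
        if (strcmp(cmds[i].name, recognised) == 0)
            return cmds[i].number;
    return -1;
}

int main(void)
{
    load_commands("/applications/my-word-processor/commands.en");
    if (ncmds == 0) {
        puts("no command file found (the path is hypothetical)");
        return 0;
    }
    printf("\"Open File\" -> command %d\n", command_number("Open File"));
    return 0;
}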

onlyonemac wrote:
Brendan wrote:
Rather than useless vague hand-waving like "not every application uses menus and those that do often use more than menus", describe an application that uses more than menus.
Brendan wrote:
I'm currently waiting for your next attempt at showing anything where the front-end needs to be different for different input devices. Describe an application that needs "an interface beyond a menubar" if you like.
Alright, what about this one: a graphics editor like Photoshop or GIMP. This needs:
  • a toolbox (otherwise the application is very inefficient to use)
  • colour spinners/sliders (otherwise choosing colours would be difficult)
  • a drawing canvas


The only significant difference between a toolbox and a menu is that pictures are used instead of words, which means you'd need some other way to make any shortcuts/commands discoverable, like little pop-up "text bubbles" (which would be valuable for describing the picture anyway, and are common practice).

I don't understand why colour spinners/sliders and the drawing canvas aren't obvious to everyone (given that the handwriting system's "touchpad mode" has been mentioned multiple times now).

onlyonemac wrote:
Or an audio/video editor like Audacity, Adobe Premiere or Cinelerra. This needs:
  • a timeline to display the position of the audio/video tracks and move them around
  • a transport bar (to start, stop, and rewind the playback, otherwise the application is very inefficient to use)
  • a preview pane (in the case of a video editor)
Even something as common as a graphical file manager these days features sidebars, trees, shortcut bars, and other such interface elements which, while not strictly necessary (all of the functionality is - or should be - available through the menubar as well), are important to have unless you're willing to make your keyboard/mouse users jump through hoops to perform simple tasks just so that your touchscreen/microphone/switch-access users can use the same menu interface, even if that's not the most efficient interface for their input devices.


Have you ever played Chess? It's a game that everyone should be taught to play properly (and not just taught to play).

People that know how to play Chess properly consider one possible move, then think about their opponent's possible responses to that move, then think about the possible responses to their opponent's possible responses, and so on. Then they'll consider another possible move in the same way; then another. Eventually they'll choose a move that has the highest probability of being beneficial multiple moves later. Because a good player is thinking multiple moves ahead they can easily beat a player who isn't.

The concept of trying to predict an opponent's response is valuable in a wide variety of situations. For example, in a forum discussion you could use this skill to avoid wasting my time with trivial, obvious and pointless things, like "what if there's a transport bar?!".

Do you remember me saying "I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself)."? The reason I said this should be very clear to you now - I don't want you to waste my time with "trivial to refute" nonsense simply because it's easy for you to mention every single little thing that pops into your head (with no thought whatsoever) from now until the end of time.

Basically; I wrote "I expect you to anticipate", because 3 days ago I anticipated a high probability of "barrage of pointlessness". This is also the reason I wrote "Choose the application and devices wisely." and "Assume that you have one and only one chance". Sadly, it only worked briefly.

I would like to pretend that in an attempt to anticipate your responses I tried to think of cases where front-ends need different behaviour for different input devices (and found none); unfortunately this isn't true - I attempted to find cases where front-ends need different behaviour for different input devices (and found none) when designing the nature of the system, long before you started questioning it. However this doesn't change the fact that I have been predicting that you will never find any irrefutable case where front-ends need different behaviour for different input devices; and doesn't change the fact that the reason I asked you to find such a case is because I know you will fail to do so.

My strategy was (and still is) to "short-circuit" the loop - to have you think of cases where front-ends might need different behaviour for different input devices, anticipate my responses, and realise (one by one) that each case is not irrefutable before you post it; and to try again, until you eventually realise, by yourself and without posting anything, that it's not possible.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 24, 2016 7:13 am 
Member

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
onlyonemac wrote:
So what if I want to insert a single letter "O" into my document? You still haven't explained that part (which was the original question, by the way).


How is it not obvious? If you want "O" you write "O", and if you want "O" (with an underline to indicate it's a command) you write "O" (with an underline to indicate it's a command), and (when the handwriting recognition system knows you're finished writing) the handwriting recognition system recognises your handwriting.
You never said anything about writing the letter with an underline; I thought you were just writing the letter on its own. I'm still not sure that this is the most efficient method though, particularly when you have to wait half a second between letters while the handwriting system decides if you're finished writing; whereas if you wrote the command name out in full you would only have to wait once, and if you're a regular user you'll probably be able to write it in under half a second.
Brendan wrote:
I mentioned both "touch pad mode" and "buttons and/or scroll bars around the edges of the hand-writing system that can be used without a special mode (including buttons for up/down/left/right events)" in the very first paragraph of my original reply.
Why don't we think about giving the handwriting recognition user an interface that's optimised for a handwriting recognition system, rather than making them navigate a mouse-optimised menu system with navigation buttons?
Brendan wrote:
In theory I guess you could have a third file for internationalisation, plus some way for input device drivers to know which application is currently the active application that doesn't suffer from race conditions. It isn't what you originally described and I'd argue it's pointlessly complicated and error prone (as the file given to input device drivers would need to match the file given to the widget service).
IIRC I originally described it with an "and/or"-type statement between my mention of "OS" and "device drivers", meaning basically "the application gives the menu information to whatever is appropriate according to the architecture of your OS/drivers".
Brendan wrote:
The only significant difference between a toolbox and a menu is that pictures are used instead of words [...]
Actually that's not the only significant difference; a more significant difference is that a menubar lays commands out in multiple levels of menus and submenus whereas a toolbox displays a selection of common commands in a flat view.
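
To make the difference concrete: even if both were driven from one command table, the presentations differ - the menubar nests by path, the toolbox is a flat selection of commands flagged as common. A minimal sketch, with made-up command data:

Code:
/* Sketch: one command table, two very different presentations.
   The command names and "common" flags are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { const char *path; bool common; } Cmd;

static const Cmd cmds[] = {
    { "File/Open",              true  },
    { "File/Export/PDF",        false },
    { "Format/Text/Bold",       true  },
    { "Format/Paragraph/Style", false },
};

int main(void)
{
    puts("menubar (hierarchical):");
    for (size_t i = 0; i < sizeof cmds / sizeof cmds[0]; i++)
        printf("  %s\n", cmds[i].path);   /* a real menubar nests by '/' */

    puts("toolbox (flat, common commands only):");
    for (size_t i = 0; i < sizeof cmds / sizeof cmds[0]; i++)
        if (cmds[i].common)
            printf("  [%s]\n", cmds[i].path);
    return 0;
}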
Brendan wrote:
I don't understand why colour spinners/sliders and the drawing canvas aren't obvious to everyone (given that the handwriting system's "touchpad mode" has been mentioned multiple times now).
And I don't understand why you still think it counts as implementing a handwriting recognition-driven interface when you're just making the handwriting recognition device work like a mouse for everything except actual typing (and possibly keyboard shortcuts).
Brendan wrote:
Basically; I wrote "I expect you to anticipate", because 3 days ago I anticipated a high probability of "barrage of pointlessness". This is also the reason I wrote "Choose the application and devices wisely." and "Assume that you have one and only one chance". Sadly, it only worked briefly.
You can't expect me to reply to all of your posts with that amount of thought and enthusiasm when you repeatedly refuse to listen to multiple posters here who have hinted that your design is very weak for the reasons that I have described and explained multiple times.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 24, 2016 12:10 pm 
Member

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
onlyonemac wrote:
So what if I want to insert a single letter "O" into my document? You still haven't explained that part (which was the original question, by the way).


How is it not obvious? If you want "O" you write "O", and if you want "O" (with an underline to indicate it's a command) you write "O" (with an underline to indicate it's a command), and (when the handwriting recognition system knows you're finished writing) the handwriting recognition system recognises your handwriting.
You never said anything about writing the letter with an underline; I thought you were just writing the letter on its own.


I consistently put underlining under each "O" as appropriate. I suspect your screen reader doesn't tell you when something is underlined.

onlyonemac wrote:
I'm still not sure that this is the most efficient method though, particularly when you have to wait half a second between letters while the handwriting system decides if you're finished writing; whereas if you wrote the command name out in full you would only have to wait once, and if you're a regular user you'll probably be able to write it in under half a second.


Why would "open file" (and waiting once) take less time to write than "O" (and waiting once)?

onlyonemac wrote:
Brendan wrote:
I mentioned both "touch pad mode" and "buttons and/or scroll bars around the edges of the hand-writing system that can be used without a special mode (including buttons for up/down/left/right events)" in the very first paragraph of my original reply.
Why don't we think about giving the handwriting recognition user an interface that's optimised for a handwriting recognition system, rather than making them navigate a mouse-optimised menu system with navigation buttons?


Why would I bother explaining that the "touch pad mode" is mostly only for discovering the menus (and the shortcuts/commands) all over again, when it's obvious that you are trying your hardest to ignore everything I've said in a futile attempt at proving that you're unable to do anything more than regurgitate the same unfounded falsehoods that you've been spouting for weeks already?

onlyonemac wrote:
Brendan wrote:
The only significant difference between a toolbox and a menu is that pictures are used instead of words [...]
Actually that's not the only significant difference; a more significant difference is that a menubar lays commands out in multiple levels of menus and submenus whereas a toolbox displays a selection of common commands in a flat view.


I already discounted that difference as insignificant (although I don't see why a toolbar couldn't have "sub-toolbars" either).

onlyonemac wrote:
Brendan wrote:
I don't understand why colour spinners/sliders and the drawing canvas aren't obvious to everyone (given that the handwriting system's "touchpad mode" has been mentioned multiple times now).
And I don't understand why you still think it counts as implementing a handwriting recognition-driven interface when you're just making the handwriting recognition device work like a mouse for everything except actual typing (and possibly keyboard shortcuts).


How exactly do you expect handwriting would work for something like drawing on a canvas? Make the poor user write "select the 123rd pixel from the left and change it to green" by hand, for every single pixel they want changed, simply because you don't want to admit that "touchpad mode" is far more efficient?

onlyonemac wrote:
Brendan wrote:
Basically; I wrote "I expect you to anticipate", because 3 days ago I anticipated a high probability of "barrage of pointlessness". This is also the reason I wrote "Choose the application and devices wisely." and "Assume that you have one and only one chance". Sadly, it only worked briefly.
You can't expect me to reply to all of your posts with that amount of thought and enthusiasm when you repeatedly refuse to listen to multiple posters here who have hinted that your design is very weak for the reasons that I have described and explained multiple times.


I should be able to expect you to reply with at least some thought. Unfortunately you don't. You still haven't described or explained anything; you just continually repeat the same opinion without ever backing it up with anything other than a never-ending stream of nonsense and distractions. Even when I try to force you to provide an irrefutable example of a front-end that needs to care what the input device is, you still fail to provide any example that isn't trivially refuted. Even when I try to force you to think about what you're saying ("anticipate my response") you still fail to show any indication of any actual thought.

Note that there are no other posters (that I'm aware of) who think that a front-end needs to care what the input device is.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 24, 2016 12:50 pm 
Member

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
I'm pretty sure that text recognition could be made much more precise if paper were replaced by a high-precision touchscreen (as precise as writing on a sheet of paper, probably like a tablet of the highest quality and resolution).

Then we could save the very sequence of the strokes. We could then use a big database containing thousands or millions of sample sequences for all known characters. We could also have a calibration screen where the user is asked to write different characters one or more times (or where users could select, or even add, characters themselves). The program could then draw back a version of the character that resembles, equals or averages the user's strokes, so the user can see whether recognition is good enough or whether it needs more tries. That could be done with A.I., neural networks, machine learning and related techniques.

But the fact of actually storing the very drawing sequence of the character strokes would make text recognition much more effective.

Now, recognizing letters that come from scanned documents, especially handwritten ones, is a different topic, since we don't have the sequence of the text strokes, just the final text.
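
A heavily simplified sketch of matching a stored stroke sequence against sample templates, in the spirit of the well-known "$1 unistroke recognizer" (real recognisers resample recorded calibration strokes by arc length and search over rotation; here the templates are generated parametrically, for illustration only):

Code:
#include <math.h>
#include <stdio.h>

#define NPTS 32
static const double TAU = 6.283185307179586;

typedef struct { double x, y; } Pt;

/* Translate the stroke to its centroid and scale it to unit size, so
   matching ignores where on the screen (and how large) it was written. */
static void normalise(Pt p[NPTS])
{
    double cx = 0, cy = 0, scale = 0;
    for (int i = 0; i < NPTS; i++) { cx += p[i].x; cy += p[i].y; }
    cx /= NPTS; cy /= NPTS;
    for (int i = 0; i < NPTS; i++) {
        p[i].x -= cx; p[i].y -= cy;
        double d = hypot(p[i].x, p[i].y);
        if (d > scale) scale = d;
    }
    if (scale > 0)
        for (int i = 0; i < NPTS; i++) { p[i].x /= scale; p[i].y /= scale; }
}

/* Average point-to-point distance between two equal-length stroke paths. */
static double distance(const Pt a[NPTS], const Pt b[NPTS])
{
    double sum = 0;
    for (int i = 0; i < NPTS; i++)
        sum += hypot(a[i].x - b[i].x, a[i].y - b[i].y);
    return sum / NPTS;
}

int main(void)
{
    Pt circle[NPTS], bar[NPTS], input[NPTS];

    for (int i = 0; i < NPTS; i++) {
        double t = (double)i / (NPTS - 1);
        circle[i].x = cos(TAU * t); circle[i].y = sin(TAU * t); /* "O" */
        bar[i].x = 0;               bar[i].y = t;               /* "l" */
        /* Input: a squashed circle written at some arbitrary position. */
        input[i].x = 100 + 40 * cos(TAU * t);
        input[i].y = 200 + 30 * sin(TAU * t);
    }
    normalise(circle); normalise(bar); normalise(input);

    double dc = distance(input, circle), db = distance(input, bar);
    printf("distance to 'O': %.3f, to 'l': %.3f -> recognised as '%c'\n",
           dc, db, dc < db ? 'O' : 'l');
    return 0;
}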

_________________

YouTube:
http://youtube.com/@AltComp126/streams
http://youtube.com/@proyectos/streams

http://master.dl.sourceforge.net/projec ... 7z?viasf=1


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 24, 2016 2:21 pm 
Member

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
onlyonemac wrote:
You never said anything about writing the letter with an underline; I thought you were just writing the letter on its own.


I consistently put underlining under each "O" as appropriate. I suspect your screen reader doesn't tell you when something is underlined.
No, it doesn't - and not because it "sucks", but because I don't want to be slowed down by listening to formatting information that usually doesn't matter, so I configured it not to read that information out unless I explicitly ask it to (and I didn't ask it to, because you didn't say anything that suggested I should).
Brendan wrote:
onlyonemac wrote:
I'm still not sure that this is the most efficient method though, particularly when you have to wait half a second between letters while the handwriting system decides if you're finished writing; whereas if you wrote the command name out in full you would only have to wait once, and if you're a regular user you'll probably be able to write it in under half a second.


Why would "open file" (and waiting once) take less time to write than "O" (and waiting once)?
Because you don't just write "O" and wait once; you write "F", wait for the "File" menu to open, then write "O". (And a single letter is ambiguous anyway: "alt-o" in LibreOffice Writer opens the "Format" menu, not anything to do with opening files.)

Also, what if I want to enter underlined text (in a natural way, *without* having to use the formatting menus designed for a keyboard-and-mouse interface)?
Brendan wrote:
Why would I bother explaining that the "touch pad mode" is mostly only for discovering the menus (and the shortcuts/commands) all over again [...]
You didn't explain that it is mostly for *discovering* menus; you explained it in response to my suggestion of having the user write out the command name, implying that it is how you intend users to use the menus all the time. Then when I explained that I would have a "help" or "list" command (or user documentation) to give the available commands, combined with the possibility of intelligently substituting or suggesting similar commands when the given command is invalid, you discarded it despite it being an approach as valid as (if not better than) your approach of navigation commands/buttons.
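
For what it's worth, the "suggest a similar command" part is cheap to do with edit distance. A minimal sketch, with made-up command names:

Code:
/* Sketch: when a written command is invalid, suggest the closest known
   command by Levenshtein edit distance. Command names are made up. */
#include <stdio.h>
#include <string.h>

static int edit_distance(const char *a, const char *b)
{
    int la = (int)strlen(a), lb = (int)strlen(b);
    int row[64];
    if (lb >= 64) return lb;          /* keep the sketch bounded */
    for (int j = 0; j <= lb; j++) row[j] = j;
    for (int i = 1; i <= la; i++) {
        int prev = row[0];            /* dp[i-1][j-1] */
        row[0] = i;
        for (int j = 1; j <= lb; j++) {
            int tmp = row[j];         /* dp[i-1][j] */
            int cost = (a[i-1] == b[j-1]) ? 0 : 1;
            int best = prev + cost;                       /* substitution */
            if (row[j] + 1 < best)   best = row[j] + 1;   /* deletion */
            if (row[j-1] + 1 < best) best = row[j-1] + 1; /* insertion */
            row[j] = best;
            prev = tmp;
        }
    }
    return row[lb];
}

int main(void)
{
    const char *commands[] = { "open", "close", "save", "underline" };
    const char *written = "undrline";   /* sloppy handwriting */
    int n = (int)(sizeof commands / sizeof commands[0]);
    int best = 0;
    for (int i = 1; i < n; i++)
        if (edit_distance(written, commands[i]) <
            edit_distance(written, commands[best]))
            best = i;
    printf("unknown command \"%s\" - did you mean \"%s\"?\n",
           written, commands[best]);
    return 0;
}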
Brendan wrote:
(although I don't see why a toolbar couldn't have "sub-toolbars" either)
We're talking about a toolbox, not a toolbar - they're different things and are used in different situations. Furthermore, enforcing the use of submenus/subtoolbars/sub-anything-elses for frequently-used commands which users will want to access quickly is a bad design principle that will reduce efficiency for all users. Perhaps you should do some research on current interface design (and existing studies on user behaviour patterns and experiences) before you start trying to design your own interface.
Brendan wrote:
How exactly do you expect handwriting would work for something like drawing on a canvas? Make the poor user write "select the 123rd pixel from the left and change it to green" by hand, for every single pixel they want changed, simply because you don't want to admit that "touchpad mode" is far more efficient?
Don't use a handwriting device for drawing, end of story. Use a pointing device (mouse, graphics tablet, etc.) that's designed for graphical work, and don't try to make a completely separate class of input device try to perform the same functions.
Brendan wrote:
You still haven't described or explained anything; you just continually repeat the same opinion without ever backing it up with anything other than a never-ending stream of nonsense and distractions.
Perhaps if you actually read what I said and considered it then you wouldn't consider it a "stream of nonsense and distractions". If this thread is distracting you from writing your operating system then how about you either
  • listen to what I say
  • or admit that you will never listen and just stop posting
?
Brendan wrote:
Even when I try to force you to provide an irrefutable example of a front-end that needs to care what the input device is, you still fail to provide any example that isn't trivially refuted.
Perhaps if you stopped giving invalid refutations (e.g. using a handwriting recognition device as a touchpad, or making handwriting recognition users navigate menus like they're using a keyboard) for my examples then you wouldn't consider my examples "trivially refuted".
Brendan wrote:
Note that there are no other posters (that I'm aware of) who think that a front-end needs to care what the input device is.
Someone mentioned a few days ago an important difference between a handwriting recognition system and a mouse/touchpad, which happened to be a difference that I had not thought of but which is true nevertheless. Maybe you would like to go back and find what they said.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing

