OSDev.org

The Place to Start for Operating System Developers

 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 17, 2016 3:43 pm 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
As I've already explained; a front end receives events, and it makes no difference whatsoever if those events came from speech recognition or keyboard or anything else. Note that this has nothing at all to do with sighted vs. blind and nothing to do with mobile vs. desktop. For example, someone (e.g. a double amputee without any hands) might use speech recognition as their input device on a desktop with a "visual interface".
Good luck with trying to create a decent voice-driven interface when your OS makes no distinction between keyboard input and speech input. So what, the voice-interface user has to say things like "tab", "arrow down", "spacebar" to navigate the interface as if they were pressing the ubiquitous keyboard-navigation keys? I'm sure they'd rather give higher-level commands like "create a new paragraph" or "save document" or "what files are in that directory?" but I can't see how that's going to work if you're trying to squeeze keyboard and voice input into the same set of events.
Brendan wrote:
onlyonemac wrote:
You can't keep insisting that "audio interface designed for sighted people trapped behind steering wheels" and "audio interface designed for blind people" are the same thing, because the person in the former situation is also unable to use their hands and will be simultaneously concentrating on driving their car, whereas the person in the latter situation may well be able to use their hands while sitting comfortably in their office chair.


I can keep insisting that you are completely wrong, because you keep being completely wrong.
You cannot keep insisting that you're right and I'm wrong without providing any justification, because the absence of justification makes me think that you're actually wrong; otherwise you would be able to justify why you're right.

On topic, I can't see why you're so dumb as to be unable to see the obvious difference between a mobile use case and a desktop use case. If we suppose that actually you can see the difference (which I seriously doubt), it can only mean one of two things, neither of which is acceptable:
  • You're expecting blind people to use only mobile devices, not desktop devices
  • You're not distinguishing between mobile devices and desktop devices, and more importantly the interfaces that they present to the user, and you can see how well that worked when Microsoft tried it
Either way, you're being dumb.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Wed Feb 17, 2016 5:13 pm 
Brendan
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
As I've already explained; a front end receives events, and it makes no difference whatsoever if those events came from speech recognition or keyboard or anything else. Note that this has nothing at all to do with sighted vs. blind and nothing to do with mobile vs. desktop. For example, someone (e.g. a double amputee without any hands) might use speech recognition as their input device on a desktop with a "visual interface".
Good luck with trying to create a decent voice-driven interface when your OS makes no distinction between keyboard input and speech input. So what, the voice-interface user has to say things like "tab", "arrow down", "spacebar" to navigate the interface as if they were pressing the ubiquitous keyboard-navigation keys? I'm sure they'd rather give higher-level commands like "create a new paragraph" or "save document" or "what files are in that directory?" but I can't see how that's going to work if you're trying to squeeze keyboard and voice input into the same set of events.


I'm not trying to squeeze keyboard and voice input into the same set of events; I'm trying to squeeze 40 different keyboard layouts, speech recognition for over 50 different spoken languages, 4 different types of mouse, a few different types of trackball, multiple types of joysticks (including steering wheels, etc), touchpads and touchscreens, light guns, wired gloves, motion/eye/head tracking, "Sip and puff", and anything that might exist in the future into the same set of events.

If you think it's practical to have thousands of different front ends for every single application to cover all the different cases then you're severely delusional.

There are only a relatively small number of event types (probably less than 100) - absolute coords, relative coords, literal characters, navigation ("up, down, left, right" for navigating trees/grids, setting/jumping to anchors) and some special commands (cancel/escape, menu, select, cut, copy, paste, etc). Device drivers map input to these events in whatever way makes sense for the device; which may or may not involve multiple modes (e.g. a command mode where "up" means the up event and a literal mode where "up" means the literal characters 'u' then 'p'), but may or may not just mean absolute or relative coords (and a virtual keyboard) or whatever.
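
To sketch what I mean (all of the names below are invented for illustration, and the real event set would only be settled after the research phase), a common event set and a front end that consumes it might look something like this:

Code:
/* Hypothetical sketch of a common input-event set; names are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    EVENT_ABS_COORDS,     /* absolute coordinates (touchscreen, light gun) */
    EVENT_REL_COORDS,     /* relative coordinates (mouse, trackball) */
    EVENT_LITERAL_CHAR,   /* one literal character (keyboard, speech, OCR) */
    EVENT_NAV_UP,         /* tree/grid navigation */
    EVENT_NAV_DOWN,
    EVENT_NAV_LEFT,
    EVENT_NAV_RIGHT,
    EVENT_CMD_CANCEL,     /* special commands */
    EVENT_CMD_SELECT,
    EVENT_CMD_COPY,
    EVENT_CMD_PASTE
} event_type;

typedef struct {
    event_type type;
    int32_t    x, y;       /* used by coordinate events */
    uint32_t   codepoint;  /* used by EVENT_LITERAL_CHAR */
} input_event;

/* The front end only sees events; it can't tell (and doesn't care) whether
   an event came from a keyboard, speech recognition, or a wired glove. */
static void front_end_handle(const input_event *ev)
{
    switch (ev->type) {
    case EVENT_LITERAL_CHAR: printf("literal U+%04X\n", (unsigned)ev->codepoint); break;
    case EVENT_NAV_UP:       printf("navigate up\n");               break;
    case EVENT_CMD_SELECT:   printf("select\n");                    break;
    default:                 printf("event %d\n", (int)ev->type);   break;
    }
}

int main(void)
{
    /* A speech driver in command mode maps the word "up" to a navigation
       event; in literal mode it would emit EVENT_LITERAL_CHAR 'u' then 'p'. */
    input_event ev = { .type = EVENT_NAV_UP };
    front_end_handle(&ev);
    return 0;
}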

onlyonemac wrote:
Brendan wrote:
I can keep insisting that you are completely wrong, because you keep being completely wrong.
You cannot keep insisting that you're right and I'm wrong without providing any justification, because the absence of justification makes me think that you're actually wrong; otherwise you would be able to justify why you're right.


The problem with trying to prove that something doesn't exist (e.g. a significant difference between an interface for people that can see and an interface for people that can't see) is that you can't point to anything and say "see, it doesn't exist". Proving that something does exist is trivial because you can simply point to it.

You think a difference exists but you can't find one (even though it should be trivial), so instead you're expecting me to prove something doesn't exist (which is impossible).

onlyonemac wrote:
On topic, I can't see why you're so dumb as to be unable to see the obvious difference between a mobile use case and a desktop use case. If we suppose that actually you can see the difference (which I seriously doubt), it can only mean one of two things, neither of which is acceptable:
  • You're expecting blind people to use only mobile devices, not desktop devices
  • You're not distinguishing between mobile devices and desktop devices, and more importantly the interfaces that they present to the user, and you can see how well that worked when Microsoft tried it
Either way, you're being dumb.


For someone using speech recognition and video output; is that "mobile" or "desktop"? What if the video output is Google Glass?

For someone using keyboard and audio output; is that "mobile" or "desktop"? What if the input device is motion tracking or wired glove/s and the keyboard is a virtual keyboard that doesn't physically exist; and the user is wearing SCUBA gear or a parachute?

For someone using touchscreen and video output; is that "mobile" or "desktop"? Is it a smartphone user sitting at a desk? Is it someone sitting in a taxi using a laptop?

For someone using 2 joysticks and video output; is that "mobile" or "desktop"? Is it someone playing a 3D game on an aeroplane, or someone operating an excavator from their desk?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Thu Feb 18, 2016 4:30 am 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
I'm not trying to squeeze keyboard and voice input into the same set of events; I'm trying to squeeze 40 different keyboard layouts, speech recognition for over 50 different spoken languages, 4 different types of mouse, a few different types of trackball, multiple types of joysticks (including steering wheels, etc), touchpads and touchscreens, light guns, wired gloves, motion/eye/head tracking, "Sip and puff", and anything that might exist in the future into the same set of events.

There are only a relatively small number of event types (probably less than 100) - absolute coords, relative coords, literal characters, navigation ("up, down, left, right" for navigating trees/grids, setting/jumping to anchors) and some special commands (cancel/escape, menu, select, cut, copy, paste, etc). Device drivers map input to these events in whatever way makes sense for the device; which may or may not involve multiple modes (e.g. a command mode where "up" means the up event and a literal mode where "up" means the literal characters 'u' then 'p'), but may or may not just mean absolute or relative coords (and a virtual keyboard) or whatever.
So you think that you'll be able to make a voice-driven audio interface that's good enough that sighted people will want to use it wherever they are to perform tasks that they normally perform at a desktop computer, but yet you're making them navigate menus with voice commands like "up", "down", "select", and so on? If anything, the one requirement for a voice-driven interface (whatever output form is used) is for it to accept actual commands (corresponding to, for example, the commands that can be selected from menus if one is using a keyboard-driven or mouse-driven interface), otherwise it will be very clumsy to use. So here's a big redesign for your OS: instead of making the events "navigation" events from an input device, make them "command" events where each command can be given verbally, selected from a menu with a mouse, selected via a keyboard shortcut, and so on. That will work better if your goal is to properly allow any input device to be used with any interface (then just make the input device driver decide how best to translate the list of available commands to the inputs received from the device, such as whether it should provide key-based navigation or take the command directly as a spoken text string or whatever).
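
To sketch the idea (all of the names here are made up for illustration): the application publishes a list of abstract commands, and each input driver decides how to expose them - as spoken phrases, menu entries, keyboard shortcuts, or whatever suits the device:

Code:
/* Sketch of "command events": the application publishes abstract commands
   and drivers map device input onto them. All names are invented. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *phrase;   /* spoken phrase / menu label */
    char        shortcut; /* e.g. Ctrl+<shortcut> on a keyboard */
} command;

static const command commands[] = {
    { "new paragraph", 'n' },
    { "save document", 's' },
    { "list files",    'l' }
};

/* A speech driver matches the recognised phrase against the published
   command list and sends a single "command event" to the front end. */
static int command_from_phrase(const char *phrase)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        if (strcmp(phrase, commands[i].phrase) == 0)
            return (int)i;
    return -1; /* unrecognised: fall back to navigation or dictation */
}

int main(void)
{
    int id = command_from_phrase("save document");
    if (id >= 0)
        printf("command event %d: %s (Ctrl+%c)\n",
               id, commands[id].phrase, commands[id].shortcut);
    return 0;
}
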
Brendan wrote:
The problem with trying to prove that something doesn't exist (e.g. a significant difference between an interface for people that can see and an interface for people that can't see) is that you can't point to anything and say "see, it doesn't exist". Proving that something does exist is trivial because you can simply point to it.

You think a difference exists but you can't find one (even though it should be trivial), so instead you're expecting me to prove something doesn't exist (which is impossible).
I already have found the difference, and explained it to you. The difference is in the situations where the two groups of users will be using the audio interface, and the resulting difference between the way that they will expect the audio interface to work and the kinds of tasks that they will try to perform with the audio interface.
Brendan wrote:
onlyonemac wrote:
On topic, I can't see why you're so dumb as to be unable to see the obvious difference between a mobile use case and a desktop use case. If we suppose that actually you can see the difference (which I seriously doubt), it can only mean one of two things, neither of which is acceptable:
  • You're expecting blind people to use only mobile devices, not desktop devices
  • You're not distinguishing between mobile devices and desktop devices, and more importantly the interfaces that they present to the user, and you can see how well that worked when Microsoft tried it
Either way, you're being dumb.


For someone using speech recognition and video output; is that "mobile" or "desktop"? What if the video output is Google Glass?

For someone using keyboard and audio output; is that "mobile" or "desktop"? What if the input device is motion tracking or wired glove/s and the keyboard is a virtual keyboard that doesn't physically exist; and the user is wearing SCUBA gear or a parachute?

For someone using touchscreen and video output; is that "mobile" or "desktop"? Is it a smartphone user sitting at a desk? Is it someone sitting in a taxi using a laptop?

For someone using 2 joysticks and video output; is that "mobile" or "desktop"? Is it someone playing a 3D game on an aeroplane, or someone operating an excavator from their desk?
  • Speech recognition and video output: usually mobile, but depends on the situation. I have seen people using speech input at a desktop computer, usually because they can't type for whatever reason, but if the person is using a portable device with a small display and they can't perform advanced tasks because they're trying to do something else at the same time (like walking down the side of a road) then it's mobile.
  • Keyboard input and audio output: desktop. A blind person (and an insane sighted person) will use that at a desktop computer, but nobody else would use that because if they're at a desktop they'll have a keyboard and monitor and if they're on a mobile device they won't have a keyboard. I don't envisage "virtual keyboards" becoming so popular that people will forgo the use of a graphical interface so that they can continue typing their document while they're scuba-diving, and if "virtual keyboards" ever do become that popular I'd guess that "holographic projection displays" become popular too and people will use them together and have a graphical interface. So for now, that's desktop.
  • Touchscreen and video output: usually mobile, but depends on the situation. It doesn't matter where the user is sitting, if they're using a mobile device such as a smartphone or tablet then it's mobile. Desktop touchscreens didn't particularly catch on, but they're desktop nevertheless. Someone sitting in a taxi with a laptop depends on how they're using the laptop: if they've brought a laptop instead of a tablet because they want to be able to type on a proper keyboard, then it counts as desktop, but if they're using it as a touchscreen device then that's mobile because they're using it in place of a tablet.
  • Two joysticks and video output: desktop. Using a joystick on a mobile device would be very difficult; even if they've set their joysticks up on a table on an aeroplane, it's still desktop (just like it would be if they were typing on a laptop).
In short, keyboards, mice, and monitors have proven themselves to be the most efficient devices for serious work, and that's why most commuters these days who work while travelling still prefer to take a laptop rather than a tablet, and people still like to have a desktop computer to work at. Forget about trying to change that; there's nothing wrong with it.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Thu Feb 18, 2016 5:44 am 
tjmonk15
Joined: Mon Jan 03, 2011 6:58 pm
Posts: 283
onlyonemac wrote:
Brendan wrote:
... which may or may not involve multiple modes (e.g. a command mode where "up" means the up event and a literal mode where "up" means the literal characters 'u' then 'p') ...
So you think that you'll be able to make a voice-driven audio interface that's good enough that sighted people will want to use it wherever they are to perform tasks that they normally perform at a desktop computer, but yet you're making them navigate menus with voice commands like "up", "down", "select", and so on? ...


That statement right there is enough to prove that you are being intentionally obtuse/dense, or aren't smart enough to have any meaningful input in this field. (Or, I guess, don't understand English well enough.)

If you can't understand someone making a random example (of why there might be different "modes" of an input device) you should just stop now.

- Monk


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Thu Feb 18, 2016 7:10 am 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
tjmonk15 wrote:
If you can't understand someone making a random example (of why there might be different "modes" of an input device) you should just stop now.
His modes seem to be either a "navigation" mode or a "typing" mode. I'm suggesting a "command" mode to replace/complement the "navigation" mode; "typing" mode won't work as such unless you're using a command-line interface and "navigation" mode will be very inefficient for some input methods.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Thu Feb 18, 2016 2:19 pm 
Brendan
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
I'm not trying to squeeze keyboard and voice input into the same set of events; I'm trying to squeeze 40 different keyboard layouts, speech recognition for over 50 different spoken languages, 4 different types of mouse, a few different types of trackball, multiple types of joysticks (including steering wheels, etc), touchpads and touchscreens, light guns, wired gloves, motion/eye/head tracking, "Sip and puff", and anything that might exist in the future into the same set of events.

There are only a relatively small number of event types (probably less than 100) - absolute coords, relative coords, literal characters, navigation ("up, down, left, right" for navigating trees/grids, setting/jumping to anchors) and some special commands (cancel/escape, menu, select, cut, copy, paste, etc). Device drivers map input to these events in whatever way makes sense for the device; which may or may not involve multiple modes (e.g. a command mode where "up" means the up event and a literal mode where "up" means the literal characters 'u' then 'p'), but may or may not just mean absolute or relative coords (and a virtual keyboard) or whatever.
So you think that you'll be able to make a voice-driven audio interface that's good enough that sighted people will want to use it wherever they are to perform tasks that they normally perform at a desktop computer, but yet you're making them navigate menus with voice commands like "up", "down", "select", and so on? If anything, the one requirement for a voice-driven interface (whatever output form is used) is for it to accept actual commands (corresponding to, for example, the commands that can be selected from menus if one is using a keyboard-driven or mouse-driven interface), otherwise it will be very clumsy to use. So here's a big redesign for your OS: instead of making the events "navigation" events from an input device, make them "command" events where each command can be given verbally, selected from a menu with a mouse, selected via a keyboard shortcut, and so on. That will work better if your goal is to properly allow any input device to be used with any interface (then just make the input device driver decide how best to translate the list of available commands to the inputs received from the device, such as whether it should provide key-based navigation or take the command directly as a spoken text string or whatever).


You start an app for the first time and have no idea what is in the menus; how do you discover/explore? You need to be able to use commands like "up" (to parent), "left" (to previous child), "right" (to next child), "down" (select) to know what is in the menus. The same applies when you open a document for the first time and have no idea what the chapters, section headings, etc. are; or when you open some source code someone else wrote; or...

If you already know what you're searching for, maybe say the "search" command followed by a key word (and have it sent as a "search event with literal characters" to the front end), and maybe any unrecognised word in command mode causes that word to be sent as "search event with literal characters" to the front end so that you don't need to say "search" (unless you want something like "search up" and can't just say "up" because it's a command).
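
As a rough sketch of that (invented names again, and subject to change after the research phase): in command mode the speech driver treats recognised words as commands and forwards anything else as a search:

Code:
/* Sketch: a speech driver in command mode treats recognised words as
   commands and forwards anything unrecognised as a search event. */
#include <stdio.h>
#include <string.h>

static void send_search_event(const char *keyword)
{
    printf("search event with literal characters: \"%s\"\n", keyword);
}

static void command_mode_word(const char *word)
{
    if (strcmp(word, "up") == 0)
        printf("navigation event: up\n");
    else if (strcmp(word, "select") == 0)
        printf("command event: select\n");
    else
        send_search_event(word); /* unrecognised word becomes a search */
}

int main(void)
{
    command_mode_word("up");     /* navigation */
    command_mode_word("budget"); /* search for "budget" */
    return 0;
}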

Note that for all things, I typically spend ages (1 month to 6 months) researching and designing it, then write a formal "draft" specification describing it, then implement it and refine the specification. Things I say before I've even begun the research phase are subject to change.

However, for a practical system, I must map all the different input devices onto a common set of events. This is inescapable and unchangeable; because (when the user is able to use one or more input devices at the same time) "input device/s" is thousands of possible combinations. "Each device driver converts input to events" is practical. "Each front-end deals with thousands of combinations of input device/s" is not. There is no sane alternative. The only thing that can change (due to research, etc) is the number and types of events.

onlyonemac wrote:
Brendan wrote:
The problem with trying to prove that something doesn't exist (e.g. a significant difference between an interface for people that can see and an interface for people that can't see) is that you can't point to anything and say "see, it doesn't exist". Proving that something does exist is trivial because you can simply point to it.

You think a difference exists but you can't find one (even though it should be trivial), so instead you're expecting me to prove something doesn't exist (which is impossible).
I already have found the difference, and explained it to you. The difference is in the situations where the two groups of users will be using the audio interface, and the resulting difference between the way that they will expect the audio interface to work and the kinds of tasks that they will try to perform with the audio interface.


Yes; you have shown (and I do agree sort of) that there are 2 groups - (blind and sighted) casual users in one group, and (blind and sighted) advanced/regular users in the other group. I assume we both know that this isn't 2 distinct groups and is more like a sliding scale (and that it has nothing to do with "sighted" vs. "blind", and is something that applies to all apps using any combination of input device/s on any OS).

onlyonemac wrote:
Brendan wrote:
For someone using speech recognition and video output; is that "mobile" or "desktop"? What if the video output is Google Glass?

For someone using keyboard and audio output; is that "mobile" or "desktop"? What if the input device is motion tracking or wired glove/s and the keyboard is a virtual keyboard that doesn't physically exist; and the user is wearing SCUBA gear or a parachute?

For someone using touchscreen and video output; is that "mobile" or "desktop"? Is it a smartphone user sitting at a desk? Is it someone sitting in a taxi using a laptop?

For someone using 2 joysticks and video output; is that "mobile" or "desktop"? Is it someone playing a 3D game on an aeroplane, or someone operating an excavator from their desk?
  • Speech recognition and video output: usually mobile, but depends on the situation. I have seen people using speech input at a desktop computer, usually because they can't type for whatever reason, but if the person is using a portable device with a small display and they can't perform advanced tasks because they're trying to do something else at the same time (like walking down the side of a road) then it's mobile.
  • Keyboard input and audio output: desktop. A blind person (and an insane sighted person) will use that at a desktop computer, but nobody else would use that because if they're at a desktop they'll have a keyboard and monitor and if they're on a mobile device they won't have a keyboard. I don't envisage "virtual keyboards" becoming so popular that people will forgo the use of a graphical interface so that they can continue typing their document while they're scuba-diving, and if "virtual keyboards" ever do become that popular I'd guess that "holographic projection displays" become popular too and people will use them together and have a graphical interface. So for now, that's desktop.
  • Touchscreen and video output: usually mobile, but depends on the situation. It doesn't matter where the user is sitting, if they're using a mobile device such as a smartphone or tablet then it's mobile. Desktop touchscreens didn't particularly catch on, but they're desktop nevertheless. Someone sitting in a taxi with a laptop depends on how they're using the laptop: if they've brought a laptop instead of a tablet because they want to be able to type on a proper keyboard, then it counts as desktop, but if they're using it as a touchscreen device then that's mobile because they're using it in place of a tablet.
  • Two joysticks and video output: desktop. Using a joystick on a mobile device would be very difficult; even if they've set their joysticks up on a table on an aeroplane, it's still desktop (just like it would be if they were typing on a laptop).
In short, keyboards, mice, and monitors have proven themselves to be the most efficient devices for serious work, and that's why most commuters these days who work while travelling still prefer to take a laptop rather than a tablet, and people still like to have a desktop computer to work at. Forget about trying to change that; there's nothing wrong with it.


Essentially; you have an inflexible idea of how people could use computers and assume every user fits into one of a small number of narrowly defined baskets. It's thinking like this that forces users to try to fit into one of the narrowly defined baskets and prevents them from being able to use computers how they want to.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Fri Feb 19, 2016 4:54 am 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
However, for a practical system, I must map all the different input devices onto a common set of events. This is inescapable and unchangeable; because (when the user is able to use one or more input devices at the same time) "input device/s" is thousands of possible combinations. "Each device driver converts input to events" is practical. "Each front-end deals with thousands of combinations of input device/s" is not. There is no sane alternative. The only thing that can change (due to research, etc) is the number and types of events.
Herein lies your flaw: you can't map so many diverse input devices to the same set of events (and therefore interface) without having to greatly restrict the flexibility of your events (and therefore interface). (The same applies with output devices.) Interfaces are efficient only when they are tailored for the specific set of input and output devices that they are intended to be used with, and trying to use them with anything else is a compromise. If you're trying to make your interface work with all kinds of input devices ranging from mice to microphones to head wands to virtual-reality-style gloves, your interface is going to be less-than-efficient for all users.
Brendan wrote:
Yes; you have shown (and I do agree sort of) that there are 2 groups - (blind and sighted) casual users in one group, and (blind and sighted) advanced/regular users in the other group. I assume we both know that this isn't 2 distinct groups and is more like a sliding scale (and that it has nothing to do with "sighted" vs. "blind", and is something that applies to all apps using any combination of input device/s on any OS).
No, you have not been paying attention. I have shown that there are 2 groups:
  • blind and sighted casual users [of audio interfaces]
  • blind advanced/regular users [of audio interfaces]
Notice that I did *not* mention sighted advanced/regular users [of audio interfaces]. That's because sighted users are never going to use an audio interface regularly, because regular computer use traditionally gets done (and always will get done, considering that this is the most efficient working environment) at a desk with a desktop or a laptop computer, and sighted users aren't going to use an audio interface on those devices. So while an advanced/regular sighted audio interface user is technically no different from a blind one, the distinction is that advanced/regular sighted audio interface users don't exist and never will. Thus the only advanced/regular audio interface users are blind ones, who are a small enough minority that application developers are going to ignore them and design audio interfaces intended only for casual use, thus limiting the functionality available to blind users.
Brendan wrote:
It's thinking like this that forces users to try to fit into one of the narrowly defined baskets and prevents them from being able to use computers how they want to.
Umm, no. The "narrowly-defined baskets" only developed *because* these are the ways that users want to use computers, and the ways that have proven themselves to be the most effective. You, on the other hand, are trying to find and implement functionality for new ways of using computers that don't yet exist, and which we don't know if they ever will exist, and in doing so are reducing the efficiency of the ways in which people use computers currently and will want to continue to use them in the future.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Fri Feb 19, 2016 1:29 pm 
DavidCooper
Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Desks and chairs are unhealthy places to work: the chair helps to kill you, and even if you have a standing desk, that will harm you unless you're also walking on a treadmill. People need to be freed up to work away from desks, and whenever they can work efficiently without a screen they will be happy to do so, and even if it takes a bit longer than normal, the health gains and the ability to work while walking in the park, hiking up mountains or cycling round the world will make it all worthwhile.

This will drive things in the direction of better software for blind people in the process, but it may not drive things quite as far as blind people would like it to because there might be some tasks which are possible to do through audio alone but which are so hard to do that way that any sighted person will give up and use a screen to save time.

It would be useful to have a specific example to work with, and then it would be worth looking to see if Brendan's operating system is really going to get in the way of an audio interface being capable of handling the task to the satisfaction of a blind user who is keen to be able to do work which is hard but not impossible through an audio interface. I can't see any reason in principle why his operating system should cause any problems in that regard, but perhaps a clear example would show something up.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Fri Feb 19, 2016 1:59 pm 
Brendan
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
However, for a practical system, I must map all the different input devices onto a common set of events. This is inescapable and unchangeable; because (when the user is able to use one or more input devices at the same time) "input device/s" is thousands of possible combinations. "Each device driver converts input to events" is practical. "Each front-end deals with thousands of combinations of input device/s" is not. There is no sane alternative. The only thing that can change (due to research, etc) is the number and types of events.
Herein lies your flaw: you can't map so many diverse input devices to the same set of events (and therefore interface) without having to greatly restrict the flexibility of your events (and therefore interface). (The same applies with output devices.) Interfaces are efficient only when they are tailored for the specific set of input and output devices that they are intended to be used with, and trying to use them with anything else is a compromise. If you're trying to make your interface work with all kinds of input devices ranging from mice to microphones to head wands to virtual-reality-style gloves, your interface is going to be less-than-efficient for all users.


A) I have no reason to suspect that I can't make all the possible combinations relatively efficient.

B) I don't have any choice. Expecting every single application to support (literally) thousands of combinations by itself is completely insane.

C) I very much doubt that someone using input device/s that one or more applications don't support will think this is more efficient.

D) The OS handles some events (e.g. "control+alt+delete"), the GUI handles some events (e.g. "alt+tab"), the application's front-end handles some events, widget services handle some events. Sometimes an application will be running inside another application (debugger output window in an IDE, desktop publishing/word-processor where you insert a picture into a document and then edit the picture while it's in the document, etc). Often the user is switching between multiple applications. All of these pieces; all written by different people; need to be compatible. I need a "globally consistent" user interface. I can't have "same input does radically different things in different places because each GUI, application and widget service had to implement everything themselves".
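
To illustrate the routing (purely a sketch, with invented names): each layer gets a chance to consume an event before it's passed along, so the same input always means the same thing no matter who wrote each piece:

Code:
/* Sketch of layered event routing: OS first, then GUI, then the
   application's front end, then widget services. Names invented. */
#include <stdbool.h>
#include <stdio.h>

enum { EV_CTRL_ALT_DEL = 1, EV_ALT_TAB = 2, EV_APP_SPECIFIC = 3 };

typedef bool (*event_handler)(int ev); /* returns true if consumed */

static bool os_layer(int ev)        { return ev == EV_CTRL_ALT_DEL; }
static bool gui_layer(int ev)       { return ev == EV_ALT_TAB; }
static bool front_end_layer(int ev) { return ev == EV_APP_SPECIFIC; }
static bool widget_layer(int ev)    { (void)ev; return true; } /* catch-all */

int main(void)
{
    event_handler chain[] = { os_layer, gui_layer, front_end_layer, widget_layer };
    int ev = EV_ALT_TAB;
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++)
        if (chain[i](ev)) {
            printf("event consumed by layer %zu\n", i);
            break;
        }
    return 0;
}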

onlyonemac wrote:
Brendan wrote:
Yes; you have shown (and I do agree sort of) that there are 2 groups - (blind and sighted) casual users in one group, and (blind and sighted) advanced/regular users in the other group. I assume we both know that this isn't 2 distinct groups and is more like a sliding scale (and that it has nothing to do with "sighted" vs. "blind", and is something that applies to all apps using any combination of input device/s on any OS).
No, you have not been paying attention. I have shown that there are 2 groups:
  • blind and sighted casual users [of audio interfaces]
  • blind advanced/regular users [of audio interfaces]
Notice that I did *not* mention sighted advanced/regular users [of audio interfaces].


Yes; I noticed you're still wrong and are still trying to pretend that sighted users can't be advanced/regular users of audio interfaces for no sane reason whatsoever (other than the "existing OSs are a worthless joke and make things like audio interfaces suck for everyone, forcing people who have a choice to avoid it" problem).

onlyonemac wrote:
That's because sighted users are never going to use an audio interface regularly, because regular computer use traditionally gets done (and always will get done, considering that this is the most efficient working environment) at a desk with a desktop or a laptop computer, and sighted users aren't going to use an audio interface on those devices. So while an advanced/regular sighted audio interface user is technically no different from a blind one, the distinction is that advanced/regular sighted audio interface users don't exist and never will. Thus the only advanced/regular audio interface users are blind ones, who are a small enough minority that application developers are going to ignore them and design audio interfaces intended only for casual use, thus limiting the functionality available to blind users.


Do you have any proof that sighted users won't regularly use things like Siri? Do you have any proof that sighted users won't regularly use audio interfaces for things like word-processing, spreadsheets, etc. when they can't use video; if those audio interfaces aren't pathetic pieces of crap (e.g. screen reader slapped on top of something designed for video) that do everything possible to scare users away (including not being part of the OS and costing $$, and including "for blind people only" marketing)?

Note: If you tell people that certain features are for old people, young children or disabled people; then most people will avoid it because they don't want people to think they're old and/or a child and/or disabled, and they don't want to think of themselves this way either. If you say "designed for people doing activities like working out at the gym, travelling, parachuting, scuba diving and motorbike racing" then there's far less social stigma involved. For example; I'd be willing to bet that if you found 10 teenagers that regularly use Siri and convinced them it's designed for blind people, 9 out of 10 would stop using Siri immediately. In the same way, if you suggested to a friend that they could use a talking clock (designed for blind people) if they don't like opening their eyes to find out the time, then they'll probably be offended ("OMG, do you think I'm disabled!?!?") even if it would solve their problem and make their life easier.

onlyonemac wrote:
Brendan wrote:
It's thinking like this that forces users to try to fit into one of the narrowly defined baskets and prevents them from being able to use computers how they want to.
Umm, no. The "narrowly-defined baskets" only developed *because* these are the ways that users want to use computers, and the ways that have proven themselves to be the most effective. You, on the other hand, are trying to find and implement functionality for new ways of using computers that don't yet exist, and which we don't know if they ever will exist, and in doing so are reducing the efficiency of the ways in which people use computers currently and will want to continue to use them in the future.


Yes; I'm trying to avoid limiting the ways that computers can be used; partly because there are a lot of people that can benefit now, and partly because I'm trying to defend against the future.

Note: So far (in my lifetime) I've seen computers go from expensive/rare things (only owned/used by large companies) to something homeless people have; I've seen large/heavy "portable" systems (with built in CRT) that very few people wanted transform into light/thin devices that are everywhere; I've seen "inter-networking" go from dial-up BBS to ubiquitous high speed Internet; I've seen "hard-wired phone with rotary dial" become smartphones; I've seen audio go from "beeps" to high quality surround sound; I've seen the introduction of things like emojis and gestures; I've seen multiple technologies (touch screens/touch pads, VR and augmented reality, motion tracking in multiple forms) go from "area of research" all the way to commercial products. If you think that what we have today will always remain the same, then... The only thing that is certain is that the way people use computers will continue to change.

Over the next 10 to 15 years; I'm expecting the existing shift towards mobile devices to continue (and the decline of "traditional desktop" to continue), and therefore also expecting input devices to change as a consequence (and things like keyboard and mouse to become less common). I'm also expecting 3D display technology will become common (including both autostereoscopic monitors and VR helmets). Maybe we'll all end up wearing something like Microsoft's HoloLens and we'll be interacting with applications like they're physical objects (tap on your girlfriend's butt and select "New spreadsheet" and calculate this week's budget on her right cheek).

I don't know; but I'm damn sure I don't want my OS to be trapped in the 1990s like existing OSs are today.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Fri Feb 19, 2016 3:05 pm 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
I very much doubt that someone using input device/s that one or more applications don't support will think this is more efficient.
But in the process of attempting to support those input devices (which, I gather, are a minority) you are compromising the efficiency of mainstream input devices. As much as I support the inclusion of minorities in software development, attempting to unify a design to the point of compromising the majority for the sake of including extreme minorities is unjustifiable; a better solution would be to optimise your event set for the majority, rather than restricting it to the "lowest common denominator" of all minorities, and to compromise on support for the most restrictive minorities (which is, in fact, an improvement over the alternative of not supporting them at all).
Brendan wrote:
Yes; I noticed you're still wrong and are still trying to pretend that sighted users can't be advanced/regular users of audio interfaces for no sane reason whatsoever (other than the "existing OSs are a worthless joke and make things like audio interfaces suck for everyone, forcing people who have a choice to avoid it" problem).
You mean to say, "I noticed that I'm still wrong and am still trying to pretend that sighted users will be advanced/regular users of audio interfaces with no justification whatsoever". Trying to pretend that sighted users will choose an audio interface over a video alternative, no matter how good the audio interface is, is an utter joke because to most sighted users a video interface will always be better than an audio interface. (Notice that I actually justified my statement.)
Brendan wrote:
Do you have any proof that sighted users won't regularly use audio interfaces for things like word-processing, spreadsheets, etc. when they can't use video
Audio interfaces can't (easily, efficiently, and effectively) convey formatting information (sure, they can read out a description of the formatting or use an audio icon, but note that I said "easily, efficiently, and effectively") so serious word processing is out of the question (note that I said "serious" word processing; casual word processing, like reviewing and editing document content, is still a valid use case). Audio interfaces are also linear, meaning that all the information flows in a single-dimensional stream, so trying to work with graphical content, like a spreadsheet, is difficult and no matter how good your audio interface is it will still be difficult because this is a limitation in the way that humans perceive sound.
Brendan wrote:
Yes; I'm trying to avoid limiting the ways that computers can be used; partly because there are a lot of people that can benefit now, and partly because I'm trying to defend against the future.
In trying to defend against the future, you are limiting the ways in which computers can be used currently.
Brendan wrote:
Note: So far (in my lifetime) I've seen computers go from expensive/rare things (only owned/used by large companies) to something homeless people have; I've seen large/heavy "portable" systems (with built in CRT) that very few people wanted transform into light/thin devices that are everywhere; I've seen "inter-networking" go from dial-up BBS to ubiquitous high speed Internet; I've seen "hard-wired phone with rotary dial" become smartphones; I've seen audio go from "beeps" to high quality surround sound; I've seen the introduction of things like emojis and gestures; I've seen multiple technologies (touch screens/touch pads, VR and augmented reality, motion tracking in multiple forms) go from "area of research" all the way to commercial products. If you think that what we have today will always remain the same, then... The only thing that is certain is that the way people use computers will continue to change.
And what do all of these have in common? They developed in response to the users themselves. This is a consumer-driven industry. Users wanted to be able to take their computers with them, so the companies developed laptops. Users started using punctuation characters to represent expression in text-based communication, so developers started adding emoji to mobile keyboards and communication apps. Users wanted to take stable video with their smartphones, so researchers integrated motion tracking into smartphone camera apps. Every successful development in the computing industry has been a direct result of something that the users wanted.

On the other hand, we've got idiots like you who try to push development forward against the way that users want it to go. That's what Microsoft have done. They changed the menubar interface to a "ribbon", and now everyone I've ever met gripes at how inefficient this interface is, yet Microsoft refuse to change it back. They made a desktop operating system designed for a touchscreen, hoping that users would get touchscreens on their desktops, but users found that desktop touchscreens are uncomfortable to use and complained about the interface change until Microsoft (sort of) changed it back. Then they made an operating system that integrates everything with a Microsoft account, hoping that users would adopt cloud-based products, but users worry about their privacy and make sure to disable every cloud integration feature in Windows. These developments failed because they weren't responding to what users want, but rather expecting that users would want a particular thing and then forcing that thing on them until either they begrudgingly give in or the developers lose money and have to change it back.

If you want to future-proof your OS, don't try to support every hypothetical input device, output device, and interface; make it extendible and expandable so that you can respond to changes in the industry and keep users satisfied without compromising on the things that they currently use every day. And application developers will follow suit: they won't need to implement support for thousands of different input devices; they will only need to implement support for those that are available at the time of development, and if a new technology comes out and they want to support it then they can add support then. It doesn't matter what you're expecting will happen; it's what actually happens that matters, and preparing your OS to respond to those developments is the most sensible thing that you could do right now.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Fri Feb 19, 2016 3:12 pm 
onlyonemac
Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
DavidCooper wrote:
This will drive things in the direction of better software for blind people in the process, but it may not drive things quite as far as blind people would like it to because there might be some tasks which are possible to do through audio alone but which are so hard to do that way that any sighted person will give up and use a screen to save time. It would be useful to have a specific example to work with, and then it would be worth looking to see if Brendan's operating system is really going to get in the way of an audio interface being capable of handling the task to the satisfaction of a blind user who is keen to be able to do work which is hard but not impossible through an audio interface. I can't see any reason in principle why his operating system should cause any problems in that regard, but perhaps a clear example would show something up.
Something like office software could be a problem: formatting text, moving elements around on the page, and so on are all tasks which are difficult to do with an audio interface, but if blind people were unable to do them they would be very frustrated at being unable to produce documents that look as professional as those produced by a sighted person. My screenreader will read out text formatting (such as the font name, size, colour, whether the text is bold/italic/underlined, and whatever else I've configured it to read out) and when I'm producing a presentation in PowerPoint it reads out the size and position of the elements on the slide as I move them around (and I think there's also a key combination to read them out on demand but I never use that feature), so I can estimate where something is on the slide and get an idea of how the presentation will look. Sighted users aren't going to want to fuss with listening to descriptions of formatting and try to guess where the elements on the slides are, so chances are audio interface developers are going to leave those features out of the audio interface, thus putting blind users at a disadvantage. That's why, even if a separate audio interface is provided, having built-in support for something like a screenreader designed to work with the graphical interface when the audio interface is too restrictive moves the responsibility off the application developers (who have repeatedly proven themselves not to care about software accessibility) to the screenreader developers (whose main focus is software accessibility).

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sat Feb 20, 2016 12:58 am 
Brendan
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
I very much doubt that someone using input device/s that one or more applications don't support will think this is more efficient.
But in the process of attempting to support those input devices (which, I gather, are a minority) you are compromising the efficiency of mainstream input devices.


No, I'm not. What makes you think that?

Do you think a mouse requires a radically different user interface to a trackball? Do you think that speech recognition requires a radically different user interface to a [url=https://en.wikipedia.org/wiki/Handwriting_recognition]handwriting recognition[/url]/OCR system? Do you think it's impossible to create a super-set of events, by simply combining the set of events that each type of input device provides? Do you think it's impossible to simplify that super-set by merging "multiple almost identical anyway" events into one event? Do you think device drivers can't emulate some/most events that were more suited to a different type of input device (e.g. use keypad to emulate mouse, use voice commands to provide up/down/left/right, use a virtual keyboard to emulate key presses with mouse or touch screen, use eye-tracking to emulate mouse, etc)?

Do you think when a front end receives a "literal character A" event it cares whether the event came from a real keyboard, or a virtual keyboard, or a speech recogniser, or something like a web camera that recognises sign language? Do you think when a front end receives a "cursor up" event it cares where that event came from? How about a "find" event, or a "switch to menus" event?
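
For example (a sketch only; the key codes and event names are invented): a keypad driver can emit the very same relative-coordinate events a mouse driver emits, so no front end ever needs a special case for it:

Code:
/* Sketch: a keypad driver emulating a mouse by emitting the same
   relative-coordinate events a mouse driver would emit. */
#include <stdio.h>

typedef struct { int dx, dy; } rel_coords_event;

static rel_coords_event keypad_to_rel_coords(char key)
{
    switch (key) {
    case '8': return (rel_coords_event){  0, -1 }; /* up    */
    case '2': return (rel_coords_event){  0,  1 }; /* down  */
    case '4': return (rel_coords_event){ -1,  0 }; /* left  */
    case '6': return (rel_coords_event){  1,  0 }; /* right */
    default:  return (rel_coords_event){  0,  0 };
    }
}

int main(void)
{
    rel_coords_event ev = keypad_to_rel_coords('8');
    /* The front end sees an ordinary "relative coords" event. */
    printf("relative move: (%d, %d)\n", ev.dx, ev.dy);
    return 0;
}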

onlyonemac wrote:
Brendan wrote:
Yes; I noticed you're still wrong and are still trying to pretend that sighted users can't be advanced/regular users of audio interfaces for no sane reason whatsoever (other than the "existing OSs are a worthless joke and make things like audio interfaces suck for everyone, forcing people who have a choice to avoid it" problem).
You mean to say, "I noticed that I'm still wrong and am still trying to pretend that sighted users will be advanced/regular users of audio interfaces with no justification whatsoever". Trying to pretend that sighted users will choose an audio interface over a video alternative, no matter how good the audio interface is, is an utter joke because to most sighted users a video interface will always be better than an audio interface. (Notice that I actually justified my statement.)


Sigh. Sure, if a sighted user is sitting in front of a nice large monitor they'll use that monitor. They won't install a nice large monitor into their car's windscreen so they can use it while driving. They won't glue a nice large monitor on the front of a bicycle helmet. They won't push a trolley in front of them while walking their dogs. Gardeners, construction workers, factory workers, garbage collection people, machinists, welders, shearers, painters, cleaners, butchers, bakers, candle-stick makers - all of these (and people doing many other types of work) can't and won't use video interfaces. There is no reason that all of these people (literally billions of people worldwide) should be prevented from using audio interfaces regularly while they can't use video interfaces, either for their own purposes or as part of their job.

Well, almost no reason.

There are 2 reasons. The first reason is "people that think screen readers are good enough", and the second reason is "people who think billions of people don't exist". These people are people like Microsoft and W3C, and every other company/group/organisation who have consistently failed to care about anything other than "keyboard, mouse and video" when designing anything and then retro-fitted the rest as an afterthought; who are responsible for creating the festering pus you've become accustomed to and are now assuming is the "best" way to do things. Note that it's not really because these people were stupid; it's just the way technology evolved.

onlyonemac wrote:
Brendan wrote:
Do you have any proof that sighted users won't regularly use audio interfaces for things like word-processing, spreadsheets, etc. when they can't use video
Audio interfaces can't (easily, efficiently, and effectively) convey formatting information (sure, they can read out a description of the formatting or use an audio icon, but note that I said "easily, efficiently, and effectively") so serious word processing is out of the question (note that I said "serious" word processing; casual word processing, like reviewing and editing document content, is still a valid use case).


Wrong. Quitting your job and/or abandoning your car so that you can use a video interface is not more efficient than just using audio to determine and/or change things like font styles while writing a novel or something where you don't care about font styles 99% of the time and only need the ability rarely.

Yes; there are some things (e.g. bitmap picture editors, high quality desktop publishing) where audio alone isn't going to work, and there are some things (e.g. remixing and remastering songs) where video alone can't work, and there's probably cases where neither alone can work (e.g. trying to get sounds in sync with video when making a movie) where you need both. There is nothing anyone can do about these cases.

onlyonemac wrote:
Audio interfaces are also linear, meaning that all the information flows in a one-dimensional stream, so trying to work with two-dimensional content, like a spreadsheet, is difficult; and no matter how good your audio interface is it will still be difficult, because this is a limitation in the way that humans perceive sound.


Spreadsheets are 4 dimensional (imagine sheets along a W axis, columns along an X axis, rows along a Y axis, and the content of each cell on a Z axis). They are flattened to 2D for the video (typically by making the user switch between planes to avoid a W axis, and by combining the Z axis with the X axis). For an audio interface you could just have 3D controls (for W, X and Y) and use time for Z (but have the ability to move forward/backward through time by words, sentences, sub-expressions, etc).

For 3D video (e.g. stereoscopy), there's probably interesting ways to use depth for one of the dimensions (e.g. display each sheet as a transparent layer at a different focal point).

Note that nobody (that I know of) is able to read multiple sentences simultaneously; as they read, their eyes and brain turn it into a linear sequence. Even for "3D video" you'd select a cell (e.g. focus at that depth, look at the cell) and read the cell's contents as a linear sequence.
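
To make the idea concrete, here's a very rough sketch of the "4D cursor" (all names invented; none of this is a real API). The point is that the navigation commands are device-independent - speech, a keypad, or anything else could produce them:

Code:
#include <stdio.h>

/* Cursor along the four axes: W = sheet, X = column, Y = row,
   Z = position within the cell's content (stepped through over time). */
struct cursor {
    int sheet, column, row, token;
};

/* Device-independent navigation commands; speech recognition, a keypad,
   eye tracking, etc. could all emit these. */
enum nav_command { NEXT_SHEET, NEXT_COLUMN, NEXT_ROW, NEXT_TOKEN };

static void navigate(struct cursor *c, enum nav_command cmd)
{
    switch (cmd) {
    case NEXT_SHEET:  c->sheet++;  c->token = 0; break;
    case NEXT_COLUMN: c->column++; c->token = 0; break;
    case NEXT_ROW:    c->row++;    c->token = 0; break;
    case NEXT_TOKEN:  c->token++;  break;  /* step through the cell's content */
    }
    /* An audio front end would speak the new position/content here. */
    printf("sheet %d, column %d, row %d, token %d\n",
           c->sheet, c->column, c->row, c->token);
}

int main(void)
{
    struct cursor c = { 0, 0, 0, 0 };
    navigate(&c, NEXT_ROW);    /* move down one row */
    navigate(&c, NEXT_TOKEN);  /* read the next word/sub-expression */
    return 0;
}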

onlyonemac wrote:
Brendan wrote:
Yes; I'm trying to avoid limiting the ways that computers can be used; partly because there are a lot of people that can benefit now, and partly because I'm trying to defend against the future.
In trying to defend against the future, you are limiting the ways in which computers can be used currently.


Wrong. The ways in which computers can be used currently are limited by the past. By doing something less limited I defend against the future while also improving the present.

onlyonemac wrote:
Brendan wrote:
Note: So far (in my lifetime) I've seen computers go from expensive/rare things (only owned/used by large companies) to something homeless people have; I've seen large/heavy "portable" systems (with built-in CRT) that very few people wanted transform into light/thin devices that are everywhere; I've seen "inter-networking" go from dial-up BBS to ubiquitous high speed Internet; I've seen "hard-wired phone with rotary dial" become smartphones; I've seen audio go from "beeps" to high quality surround sound; I've seen the introduction of things like emojis and gestures; I've seen multiple technologies (touch screens/touch pads, VR and augmented reality, motion tracking in multiple forms) go from "area of research" all the way to commercial products. If you think that what we have today will always remain the same, then... The only thing that is certain is that the way people use computers will continue to change.
And what do all of these have in common? They developed in response to the users themselves. This is a consumer-driven industry. Users wanted to be able to take their computers with them, so the companies developed laptops. Users started using punctuation characters to represent expression in text-based communication, so developers started adding emoji to mobile keyboards and communication apps. Users wanted to take stable video with their smartphones, so researchers integrated motion tracking into smartphone camera apps. Every successful development in the computing industry has been a direct result of something that the users wanted.


Every successful development in the computing industry has been a direct result of technology, then economics and marketing. When technology makes something possible, it ends up being a contest between economics (price) and marketing (convincing people they want to pay that price). Users have no idea what they want until they see it (until marketing shoves it in their face and tells them to want it).

onlyonemac wrote:
On the other hand, we've got idiots like you who try to push development forward against the way that users want it to go. That's what Microsoft have done. They changed the menubar interface to a "ribbon", and now everyone I've ever met gripes at how inefficient this interface is, yet Microsoft refuse to change it back. They made a desktop operating system designed for a touchscreen, hoping that users would get touchscreens on their desktops, but users found that desktop touchscreens are uncomfortable to use and complained about the interface change until Microsoft (sort of) changed it back. Then they made an operating system that integrates everything with a Microsoft account, hoping that users would adopt cloud-based products, and users worry about their privacy and make sure to disable every cloud integration feature in Windows. These developments failed because they're not responding to what users want, but rather expecting that users will want a particular thing and then forcing that thing on them until either they begrudgingly give in or the developers lose money and have to change it back.


For the ribbon interface; I agree - it was a clear mistake. For the rest, I don't think it was a mistake at all - I think it was a calculated risk. Essentially, I think Microsoft knew all along that desktop users wouldn't like it, but wanted to force application developers to support their smartphones, wanted to force users to adopt their cloud solutions, and wanted to force people into using subscription-based schemes; and simply didn't care that desktop users wouldn't like it.

onlyonemac wrote:
If you want to future-proof your OS, don't try to support every hypothetical input device, output device, and interface; make it extendible and expandable so that you can respond to changes in the industry and keep users satisfied without compromising on the things that they currently use every day. And application developers will follow suit: they won't need to implement support for thousands of different input devices; they will only need to implement support for those that are available at the time of development, and if a new technology comes out and they want to support it then they can add support then. It doesn't matter what you're expecting will happen; it's what actually happens that matters, and preparing your OS to respond to those developments is the most sensible thing that you could do right now.


This is so retarded that it doesn't justify a response. Did you think about what you're saying when you said it?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sat Feb 20, 2016 1:10 am 
Offline
Member
Member
User avatar

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
DavidCooper wrote:
This will drive things in the direction of better software for blind people in the process, but it may not drive things quite as far as blind people would like it to because there might be some tasks which are possible to do through audio alone but which are so hard to do that way that any sighted person will give up and use a screen to save time. It would be useful to have a specific example to work with, and then it would be worth looking to see if Brendan's operating system is really going to get in the way of an audio interface being capable of handling the task to the satisfaction of a blind user who is keen to be able to do work which is hard but not impossible through an audio interface. I can't see any reason in principle why his operating system should cause any problems in that regard, but perhaps a clear example would show something up.
Something like office software could be a problem: formatting text, moving elements around on the page, and so on are all tasks which are difficult to do with an audio interface, but which, if blind people were unable to do them, would leave them very frustrated at being unable to produce documents that look as professional as those produced by a sighted person. My screen reader will read out text formatting (such as the font name, size, colour, whether the text is bold/italic/underlined, and whatever else I've configured it to read out), and when I'm producing a presentation in PowerPoint it reads out the size and position of the elements on the slide as I move them around (I think there's also a key combination to read them out on demand, but I never use that feature), so I can estimate where something is on the slide and get an idea of how the presentation will look. Sighted users aren't going to want to fuss with listening to descriptions of formatting and try to guess where the elements on the slides are, so chances are audio interface developers are going to leave those features out of the audio interface, thus putting blind users at a disadvantage.
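
To make that read-out concrete, it amounts to something like this (a rough sketch with invented names; this is not my actual screen reader's API):

Code:
#include <stdio.h>
#include <stdbool.h>

/* Attributes of a run of text (illustrative only). */
struct text_attrs {
    const char *font;
    int size_pt;
    bool bold, italic, underline;
};

/* Compose the spoken description of a run's formatting. */
static void announce_formatting(const struct text_attrs *a)
{
    printf("%s, %d point%s%s%s\n",
           a->font, a->size_pt,
           a->bold      ? ", bold"       : "",
           a->italic    ? ", italic"     : "",
           a->underline ? ", underlined" : "");
}

int main(void)
{
    struct text_attrs heading = { "Arial", 16, true, false, false };
    announce_formatting(&heading);  /* speaks: "Arial, 16 point, bold" */
    return 0;
}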


Yes; developers don't do anything now (for existing OSs) because it's much, much easier to use "screen reader" as an excuse to do nothing and laugh at blind people while spitting in their sad little faces. Yes; sighted users don't use audio now because it's extremely poor and marketed wrong.

Neither of these things apply to my project. They are a disservice to blind people and a disservice to all users. I do not, and never will, consider these things acceptable.

Because neither of these things applies to my project, your repeated assumption that developers won't bother providing things that both blind users and sighted users (when using the audio interface) will want is pure unsubstantiated nonsense.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sat Feb 20, 2016 6:40 am 
Offline
Member
Member

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
onlyonemac wrote:
But in the process of attempting to support those input devices (which, I gather, are a minority) you are compromising the efficiency of mainstream input devices.


No, I'm not. What makes you think that?
I know you think that you're not, but in reality you are; you're just unable to recognise it.
Brendan wrote:
Do you think a mouse requires a radically different user interface to a trackball? Do you think that speech recognition requires a radically different user interface to a [url=https://en.wikipedia.org/wiki/Handwriting_recognition]handwriting recognition[/url]/OCR system?
Do you think a mouse requires a radically different user interface to a handwriting recognition system? Yes, it does, but you're trying to group those all into the same set of events, and trying to map all input devices to all of the events even if those devices can't reasonably be mapped to those events.
Brendan wrote:
Do you think it's impossible to create a super-set of events, by simply combining the set of events that each type of input device provides?
No, that is not impossible, but you're trying to create a sub-set of events from all input devices and then map all input devices across all of those events, without accepting any compromise for the less common/non-existent devices.
Brendan wrote:
Do you think device drivers can't emulate some/most events that were more suited to a different type of input device (e.g. use keypad to emulate mouse, use voice commands to provide up/down/left/right, use a virtual keyboard to emulate key presses with mouse or touch screen, use eye-tracking to emulate mouse, etc)?
Wow, so my keypad acts like a mouse? That's so cool!!! Now I want to use it to move the insertion point in my document... oh dear, I can't, because we had to leave the actual keypad functionality out because there's no way to make the mouse act like a keypad!!!
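
To spell out why that mapping only goes one way, here's a rough sketch (invented event names and key codes; this is not a real driver API):

Code:
/* Illustrative key codes and event types; all invented. */
enum { KEY_UP = 1, KEY_DOWN, KEY_LEFT, KEY_RIGHT };
enum event_type { EV_KEY, EV_POINTER_MOVE };

struct event {
    enum event_type type;
    int code;    /* key code, for EV_KEY */
    int dx, dy;  /* pointer deltas, for EV_POINTER_MOVE */
};

/* Keypad driver in "mouse emulation" mode: arrow keys become pointer
   movement, and everything else passes through as a key event. */
struct event keypad_translate(int keycode)
{
    struct event ev = { EV_KEY, keycode, 0, 0 };
    switch (keycode) {
    case KEY_UP:    ev.type = EV_POINTER_MOVE; ev.code = 0; ev.dy = -1; break;
    case KEY_DOWN:  ev.type = EV_POINTER_MOVE; ev.code = 0; ev.dy =  1; break;
    case KEY_LEFT:  ev.type = EV_POINTER_MOVE; ev.code = 0; ev.dx = -1; break;
    case KEY_RIGHT: ev.type = EV_POINTER_MOVE; ev.code = 0; ev.dx =  1; break;
    }
    return ev;
}

The arrow keys turn into pointer movement easily enough; there is no sensible inverse that turns pointer movement back into keypad semantics, which is exactly the asymmetry I'm complaining about.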
Brendan wrote:
Sigh. Sure, if a sighted user is sitting in front of a nice large monitor they'll use that monitor. They won't install a nice large monitor into their car's windscreen so they can use it while driving. They won't glue a nice large monitor on the front of a bicycle helmet. They won't push a trolley in front of them while walking their dogs.
They also won't be trying to move the elements around on the slides in their presentation while they're walking their dogs, so developers won't put the ability to move the elements around into the audio interface. That means those who rely on the audio interface to move the elements around (i.e. blind people) won't be able to do it at all, and so will be at a disadvantage against sighted presenters.
Brendan wrote:
Wrong. Quitting your job and/or abandoning your car so that you can use a video interface is not more efficient than just using audio to determine and/or change things like font styles while writing a novel (or anything else where you don't care about font styles 99% of the time and only need the ability rarely).
You honestly believe that people will quit their job and abandon their car so that they can carry on working on their document? Actually they'll just stop working on the document for a while, and carry on when they've finished driving.

Also, as you quite rightly pointed out, people don't care about font styles 99% of the time, so they'll leave that task for when they've got access to a graphical interface and find something else to do, and so developers won't bother putting that functionality in the audio interface.

You seriously need to get the cause and effect the right way round; in that one quote you've got it wrong twice. The first time you had people abandoning their car so they can work on a document, when actually they'll stop working on the document so they can drive their car; and the second time you had people struggling to change font styles with an audio interface because of how infrequently they care about font styles, when actually they'll change the font styles with a graphical interface because of how infrequently they care about font styles.
Brendan wrote:
Note that nobody (that I know of) is able to read multiple sentences simultaneously; as they read, their eyes and brain turn it into a linear sequence. Even for "3D video" you'd select a cell (e.g. focus at that depth, look at the cell) and read the cell's contents as a linear sequence.
Reading is linear, but scanning for information is not. Try finding one cell containing a particular number in a large spreadsheet when you can only hear one cell being read out at a time and see how long it takes you to find it; as a sighted person you can scan quickly around a page and take in all of the information almost in parallel.
Brendan wrote:
onlyonemac wrote:
Brendan wrote:
Yes; I'm trying to avoid limiting the ways that computers can be used; partly because there are a lot of people that can benefit now, and partly because I'm trying to defend against the future.
In trying to defend against the future, you are limiting the ways in which computers can be used currently.


Wrong. The ways in which computers can be used currently are limited by the past. By doing something less limited I defend against the future while also improving the present.
Once again you're failing to recognise how your attempts to future-proof your OS are reducing efficiency in current systems, rather than improving them. I've already told you how to future-proof your OS in a way that doesn't do this, but you just called it "retarded".
Brendan wrote:
Every successful development in the computing industry has been a direct result of technology, then economics and marketing. When technology makes something possible, it ends up being a contest between economics (price) and marketing (convincing people they want to pay that price). Users have no idea what they want until they see it (until marketing shoves it in their face and tells them to want it).
Go get a job at Microsoft, who completely ignore what their users want and use marketing to shove stuff in their users' faces and convince them how important it is that they buy it right now, then redirect all their complaints to /dev/null while you blissfully dream of the next big thing you're going to force onto your users. Or you can listen to what users want and respond accordingly.

You know why I prefer Android to Windows? Because Google listen to what their users want and give them just that. You know why I prefer Linux to Windows? Because the developers read all of the suggestions that users submit, find the most popular ones, and work on implementing them.

When GNOME 3 abandoned many traditional desktop concepts and Ubuntu switched to the Unity desktop environment, the uproar was horrendous and many users wanted GNOME 2 back, so a group of developers got together, forked the original GNOME 2 project, and created the MATE desktop environment. Then another group of developers combined the MATE desktop environment with Ubuntu to make Ubuntu MATE, and since the first release it's become incredibly popular and many Ubuntu users are switching to it.

That's what it means to listen to your users, and that's what creates true success in the computer industry. If Microsoft had taken the same approach as those two small groups of developers, working at home in their spare time, then they wouldn't be facing all of the hate that they're facing today, and sales of each new Windows release would have been "excellent" rather than "good enough" like they have been for the past few years. Remember: happy users bring profit through satisfaction; frustrated users bring profit only because developers feed off of their ignorance and naivety.
Brendan wrote:
[...] and simply didn't care that desktop users wouldn't like it.
Is that justified? Is that the right attitude to take? How do you feel about a big company simply not caring about the majority of their current user base?
Brendan wrote:
This is so retarded that it doesn't justify a response. Did you think about what you're saying when you said it?
I see you don't take kindly to a piece of classic software-development wisdom. It is well known that extendibility is a much more powerful weapon against the future than functionality, because no matter how much you implement now there will come a time when your software won't have everything, and if it isn't extendible then it will never keep up. That's why Windows fell behind demands in the industry, and now Microsoft are having to turn it into a hackish kludge of conflicting layers to try to keep up. Make your software extendible now, while you're in the early stages, and you will always be able to keep up, without having to implement functionality today for everything that's going to come out in the next 50 years.
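
To be concrete about what "extendible" means, it can be as simple as a registration hook (a minimal sketch with hypothetical names; this isn't taken from any real kernel):

Code:
#include <stddef.h>

/* A driver for some class of input device (illustrative interface). */
struct input_driver {
    const char *name;
    void (*poll)(void);  /* gather this device's events */
};

#define MAX_DRIVERS 32
static const struct input_driver *drivers[MAX_DRIVERS];
static size_t driver_count;

/* Device classes that don't exist yet can register themselves when they
   do; the core OS never needs to know about them in advance.
   Returns 0 on success, -1 if the table is full. */
int register_input_driver(const struct input_driver *drv)
{
    if (driver_count >= MAX_DRIVERS)
        return -1;
    drivers[driver_count++] = drv;
    return 0;
}

Ship the hook now, and the driver for whatever input device comes out next decade gets written when the hardware actually exists, not before.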

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sat Feb 20, 2016 6:03 pm 
Offline
Member
Member
User avatar

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Do you think a mouse requires a radically different user interface to a handwriting recognition system? Yes, it does, but you're trying to group those all into the same set of events, and trying to map all input devices to all of the events even if those devices can't reasonably be mapped to those events.


It does? Why? It sounds like a massive lie to me.

Choose any type of application you like and any output device you like. Then choose any 2 combinations of one or more input devices (e.g. speech alone vs. keyboard+mouse). Finally; describe in detail how that front end (for your one chosen application and your one chosen output device) must be different depending on which of 2 and only 2 combinations of input devices are being used. I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself). Choose the application and devices wisely. Assume that you have one and only one chance to prove beyond any doubt that you are not a liar.

Note: To be perfectly clear here; it's been about 4 weeks of constantly repeated but rarely substantiated assertions; and so I am trying to goad you into substantiating one of your assertions.

To avoid distractions; I shall refrain from replying to the remainder of this post until after you have attempted this little exercise.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

