OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sat Feb 13, 2016 8:42 pm

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5143
Brendan wrote:
You have a poor understanding of hue shifting. If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see. It doesn't cause colours to match when they otherwise wouldn't (unless 2 colours are so close together that it's bad for everyone anyway). What it does mean is that (especially if it's excessive) things on the screen end up using colours that don't match reality - a banana looks slightly green, an orange looks a bit yellow.

It sounds like you propose trading one form of color blindness (where some hues are indistinguishable) for another (where many hues are nearly indistinguishable).

How does this work for dichromats, who can only see color in two dimensions rather than three? Hue shifting doesn't help - the only available hues are orange and blue.

(For my color blindness, a simple increase of saturation is enough to let me distinguish colors on a screen.)

Brendan wrote:
The laws of physics don't apply to computer programmers.

The laws of physics do apply to the display connected to the computer. Software can't increase the number of hues a colorblind user sees, but hardware can.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 2:00 am

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Octocontrabass wrote:
Brendan wrote:
You have a poor understanding of hue shifting. If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see. It doesn't cause colours to match when they otherwise wouldn't (unless 2 colours are so close together that it's bad for everyone anyway). What it does mean is that (especially if it's excessive) things on the screen end up using colours that don't match reality - a banana looks slightly green, an orange looks a bit yellow.

It sounds like you propose trading one form of color blindness (where some hues are indistinguishable) for another (where many hues are nearly indistinguishable).

How does this work for dichromats, who can only see color in two dimensions rather than three? Hue shifting doesn't help - the only available hues are orange and blue.


There are infinitely many numbers between 0.0001 and 0.0002. In the same way, there are an infinite number of hues between blue and cyan, or between green and cyan, or between red and yellow, or between blue and violet. Likewise, "blue" is not one hue; it's a category covering an infinite number of hues.

Due to the way display hardware works (3 primary colours, where magnitude and not wavelength can be adjusted, and where the magnitude of each primary colour typically has only 256 possible values), with only 2 primary colours you'd typically be limited to 65536 colours. Of these 65536 colours, the ratio between one primary colour and the other is the same for many colours (e.g. "1:2" is the same as "50:100"), and colours sharing a ratio can (in a slightly crude way) be considered different shades of the same hue rather than different hues. The number of hues can then (in the same slightly crude way) be calculated as the number of unique ratios between the 2 primary colours.

Here's some crappy code:

Code:
#include <stdio.h>

double ratioTable[65536];

int main(void) {
   int i;
   int j;
   int k;
   int entries = 0;
   double ratio;
   int found;

   for(i = 0; i < 256; i++) {
      for(j = 0; j < 256; j++) {
         /* Ratio between the two primaries (j == 0 gives inf, or NaN for 0/0) */
         ratio = (double)i / (double)j;
         /* Linear search for a ratio we've already counted */
         k = 0;
         found = 0;
         while(k < entries) {
            if(ratioTable[k] == ratio) {
               found = 1;
               break;
            }
            k++;
         }
         if(found == 0) {
            ratioTable[entries] = ratio;
            entries++;
         }
      }
   }
   printf("%d hues\n", entries);
   return 0;
}


This says "39642 hues" (or, 39642 unique ratios between 2 primary colours).

Of course this isn't accurate for a variety of reasons (not least of which is that for any specific hue there'd be a limited number of shades, and you don't want to end up with a radically different shade just to avoid a tiny difference in hue). However, it is enough to show that "the only available hues are orange and blue" (2 hues) is a massive understatement.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 4:26 am

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
I simply can't understand how anyone can think a screen reader can ever be ideal. It's not ideal, it's a compromise. A user interface designed specifically for audio can get much much closer to ideal (and if for some extremely unlikely cases it can't be better than a screen reader, then it can still match the experience of a screen reader and won't be worse).
I agree that a *good* audio interface (i.e. not "Hey PC, read the second paragraph of yesterday's letter to me") is ultimately better than a screenreader, but I also know that developers aren't going to write such an interface. The only incentive developers ever have to write anything is that it will benefit the majority of users, and as I've said before, the kind of audio interface that the majority of users want isn't the same as the kind that screenreader users want. The majority of users want an audio interface as a convenience, to let them perform tasks in a "natural" way while doing something else at the same time (the niche that smartphone "personal assistants" like Siri, Cortana, etc. fill). Blind users, by contrast, want something that's quick to navigate (i.e. not voice-controlled), quick to get information from (i.e. not spending 5 seconds listening to "here you go, this is the second paragraph of your document" before everything), and that offers all the functionality sighted users get through a graphical interface (which sighted users won't require in their audio interface, and which would probably make their audio interface too cumbersome to use). So you need *two* audio interfaces: one for sighted users who are multitasking/on-the-go/lazy, and another for blind users who rely on the audio interface to perform all of the tasks that sighted users perform with the graphical interface, and who want to be able to perform those tasks just as efficiently.

Perhaps I should post a demonstration and comparison of the two interfaces...

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 4:54 am

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
I simply can't understand how anyone can think a screen reader can ever be ideal. It's not ideal, it's a compromise. A user interface designed specifically for audio can get much much closer to ideal (and if for some extremely unlikely cases it can't be better than a screen reader, then it can still match the experience of a screen reader and won't be worse).
I agree that a *good* audio interface (i.e. not "Hey PC, read the second paragraph of yesterday's letter to me") is ultimately better than a screenreader, but I also know that developers aren't going to write such an interface. The only incentive developers ever have to write anything is that it will benefit the majority of users, and as I've said before, the kind of audio interface that the majority of users want isn't the same as the kind that screenreader users want. The majority of users want an audio interface as a convenience, to let them perform tasks in a "natural" way while doing something else at the same time (the niche that smartphone "personal assistants" like Siri, Cortana, etc. fill). Blind users, by contrast, want something that's quick to navigate (i.e. not voice-controlled), quick to get information from (i.e. not spending 5 seconds listening to "here you go, this is the second paragraph of your document" before everything), and that offers all the functionality sighted users get through a graphical interface (which sighted users won't require in their audio interface, and which would probably make their audio interface too cumbersome to use). So you need *two* audio interfaces: one for sighted users who are multitasking/on-the-go/lazy, and another for blind users who rely on the audio interface to perform all of the tasks that sighted users perform with the graphical interface, and who want to be able to perform those tasks just as efficiently.

Perhaps I should post a demonstration and comparison of the two interfaces...


Perhaps you should; because I honestly can't understand why a sighted person wouldn't want all the same things (quick navigation, full functionality, faster-than-normal speech, etc.).

Is it just a matter of learning curves and discoverability? E.g. a casual user doesn't know about all the commands/keyboard shortcuts, while an advanced user is far more proficient because they use all the commands/shortcuts?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 9:13 am

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5143
Brendan wrote:
This says "39642 hues" (or, 39642 unique ratios between 2 primary colours).

There is a difference between hue and saturation. You are counting different saturations, not different hues. A dichromat cannot distinguish hues other than "orange" and "blue".

When it works normally (i.e. the viewer is not color blind), the human visual system is less sensitive to changes in saturation than changes in hue. Even though a dichromat is likely to be able to distinguish saturation better than a trichromat, it's nowhere near the point where they would be able to distinguish so many saturations.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 9:28 am

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Octocontrabass wrote:
Brendan wrote:
This says "39642 hues" (or, 39642 unique ratios between 2 primary colours).

There is a difference between hue and saturation. You are counting different saturations, not different hues. A dichromat cannot distinguish hues other than "orange" and "blue".

When it works normally (i.e. the viewer is not color blind), the human visual system is less sensitive to changes in saturation than changes in hue. Even though a dichromat is likely to be able to distinguish saturation better than a trichromat, it's nowhere near the point where they would be able to distinguish so many saturations.


You're not making any sense, and seem to be flipping "hue" and "saturation" randomly.

Tell me, if you mix 50% orange with 50% blue what hue do you get? How about 25% orange and 75% blue?

Now here's a picture:
[image: hue spectrum]

Tell me which ranges of hues you can tell apart.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 11:41 am

Joined: Sat Feb 13, 2016 2:12 am
Posts: 6
removed


Last edited by removed on Mon Feb 15, 2016 5:27 pm, edited 3 times in total.

 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 11:50 am

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5143
Brendan wrote:
You're not making any sense, and seem to be flipping "hue" and "saturation" randomly.

I am not. "Hue" refers to the color's angle relative to the center point at neutral. "Saturation" refers to a color's distance from neutral. A dichromat can't distinguish hue beyond "in the orange direction" and "in the blue direction".

Brendan wrote:
Tell me, if you mix 50% orange with 50% blue what hue do you get? How about 25% orange and 75% blue?

Assuming a uniform perceptual color space, a dichromat will report neutral (gray) and blue, respectively.

Brendan wrote:
Now here's a picture:

Tell me which ranges of hues you can tell apart.

HSV is a terrible example. What happened to those perceptually uniform CIE colorspaces you've been using?

Now here's a picture:
[image: sRGB colours plotted in a perceptually uniform colour space at roughly uniform luminance]
This is a graph of the available colors in sRGB in a uniform perceptual color space at roughly uniform luminance. The neutral point is in the middle, at coordinate (0,0). When you read this graph using polar coordinates rather than Cartesian, the angle is hue and the magnitude is saturation.

Here's another picture:
[image: the same graph with simulated deuteranopia]
This is the same graph as above, with simulated deuteranopia (the most common form of dichromacy). The neutral point is now a neutral line, and conveniently happens to be very nearly horizontal. The graph is now effectively one-dimensional, as the horizontal dimension no longer provides any useful information. If you then take this one-dimensional graph and convert to polar coordinates, hue can only take on two possible values: "orange" and "blue".
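To make the polar-coordinates reading concrete: in an opponent-axis space like CIELAB, hue is the angle of the (a, b) point around the neutral point and saturation (chroma) is its distance from neutral. A minimal sketch, assuming you already have Lab-style coordinates (this is standard colorimetry, not code from this thread):

Code:
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Hue angle and chroma from CIELAB-style opponent coordinates: hue is the
   polar angle around the neutral point (a = b = 0), chroma is the distance
   from it. For a deuteranope the (roughly horizontal) red-green axis
   carries no information, so hue collapses to two values - b > 0 (the
   "orange" side) or b < 0 (the "blue" side) - while chroma and lightness
   still vary continuously. */
void lab_to_hue_chroma(double a, double b, double *hue_deg, double *chroma) {
   *chroma = hypot(a, b);                   /* distance from neutral */
   *hue_deg = atan2(b, a) * 180.0 / M_PI;   /* angle around neutral */
   if (*hue_deg < 0.0) *hue_deg += 360.0;
}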


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 1:20 pm

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

ianhamilton wrote:
Brendan wrote:
The number of developers that have read any accessibility guideline is probably less than 20%. The number that actually follow these guidelines is probably less than 2%.


I assume you don't have a source to quote for the research that led to those numbers? Of course not - they're just made up, and they certainly don't reflect any professional experience in any of the companies or agencies I've worked for. Accessibility isn't where it needs to be, but 2% is ridiculous.


The important thing is that we both understand that accessibility isn't where it needs to be.

ianhamilton wrote:
Brendan wrote:
Note that this was primarily intended for 3D games (where deaf people have an unfair disadvantage); where there is a relationship between the sounds and the size and material of (virtual) objects that caused them.


Minecraft were considering this, and decided against it, because providing accurate visualisations of where sounds came from gave deaf gamers an unfair advantage - it was far more information than most people get through stereo speakers. So instead, they just visually indicate whether the sound is coming from the left or right.


Aren't you the same person that was arguing it isn't accurate enough yesterday?

A person using surround sound has an advantage over a person using stereo. A person using "visual sound" could have an advantage over both, or be somewhere in the middle, or still be at a disadvantage; depending on exactly how it's implemented.

ianhamilton wrote:
And no, games do not have a simple relationship between the size and material of the virtual object and the sound it causes. Honestly, you've got the wrong idea about this. It sounds nice in theory, but in games there's just too much that can go wrong with an idea like this. Take prioritisation, for example: at any one time in a game you'll have a large number of simultaneous sounds happening at once. Assuming for a minute that your system is able to separate all of these sounds out: through audio you're able to tune out sounds that are less important and concentrate on sounds that are more important. The same needs to happen for visual representations of the sound. How will your software be able to determine that? Volume is not any kind of indicator of importance. And if you can't figure it out, you're left with a visual cacophony of representations of irrelevant sounds.


Sounds coming from different points in the 3D world are easy to tell apart (using their origin). Sounds coming from the same point don't need to be separated. A visual cacophony would be an accurate and fair representation of an audible cacophony.

ianhamilton wrote:
Brendan wrote:
Forget about the icons and assume it's just a meaningless red dot. That alone is enough to warn a deaf user (playing a 3D game) that something is approaching quickly from behind.


And out of the ten different sounds going on behind the player, how do you know which one is important and which isn't? The sound of the hovering machine with a gun, or the sound of the machine in the wall next to it? If you're playing by ear you'll know which sound is which through experience. Good luck trying to determine it with an algorithm.


A human player that can hear doesn't really know which of many sounds is the most important; therefore neither should a deaf person.

ianhamilton wrote:
Brendan wrote:
The user sees an icon towards the bottom left of the screen, realises that's where their messaging app is and figures out the sound must've come from the messaging app. The user sees the exact same icon to the right of the screen, realises that's where the telephony app is, and figures out the sound must've come from the telephony app.


I meant if you're using something like WhatsApp or Skype that handles both messages and calls in the same software. It is not in any way possible for you to predict what kind of sounds will be used for different tasks in the same software by analysing their sound. You're better off just communicating the ambiguity that you have rather than trying to fudge presumed meaning: just highlight that some kind of a sound happened there, and leave it at that.


If the same app uses very similar sounds for very different things then I hardly think it's reasonable to blame the OS's "sound visualisation" system.

ianhamilton wrote:
Brendan wrote:
You have a poor understanding of hue shifting. If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see. It doesn't cause colours to match when they otherwise wouldn't (unless 2 colours are so close together that it's bad for everyone anyway). What it does mean is that (especially if it's excessive) things on the screen end up using colours that don't match reality - a banana looks slightly green, an orange looks a bit yellow.


Actually no, most daltonising algorithms shift individual colours, because they know what folly it is to try to shift all of them along a bit in the way that you're suggesting. Here's an example of an actual real world non-theoretical daltonising algorithm in effect:


What exactly do you suggest I do? Sneak into peoples houses at night and steal one of their eyes and transplant them into colour blind people during the day; so that everyone has at least one "non-colour blind" eye?

You are complaining because my solution is not able to solve unsolvable problems and is only "better" and not "impossibly perfect".

ianhamilton wrote:
Brendan wrote:
Let me rephrase to make it a bit clearer: "You can't just compress a wide range of colours into a narrow range of colours while keeping them as easy to tell apart. It isn't physically possible".


Wrong. For dichromats, if the original hues were so close that a colour blind user can't tell them apart after hue shifting, then either the hue shifting is broken or people who aren't colour blind would've also had too much trouble telling the hues apart.
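As a crude sketch of the "compression" being described - proportionally squeezing the full hue wheel into a narrower arc the user can resolve - here's a toy remap. The arc parameters are placeholders for illustration, not a real daltonising algorithm:

Code:
#include <math.h>

/* Toy hue compression: map the full 0-360 degree hue wheel proportionally
   into a narrower target arc (e.g. an arc a dichromat can still resolve).
   Distinct input hues stay distinct, they just end up closer together -
   exactly the trade-off being argued about above. */
double compress_hue(double hue_deg, double arc_start_deg, double arc_span_deg) {
   double t = hue_deg / 360.0;   /* position on the original wheel, 0..1 */
   return fmod(arc_start_deg + t * arc_span_deg, 360.0);
}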

For monochromats, there's nothing an OS can do, and no amount of your pointless complaining is going to change that.

ianhamilton wrote:
Brendan wrote:
We live in completely different worlds. I live in an imperfect world where developers rarely do what they should. You live in a perfect world where all developers follow accessibility guidelines rigorously and do use colourblind simulators, and graphics from multiple independent sources are never combined. While "do effectively nothing and then blame application developers" sounds much easier and might work fine for your perfect world, it's completely useless for my imperfect world.


I do this for a living, and have done so for ten years. I think I have a fairly reasonable picture of developer knowledge and uptake. I don't for a second live in some perfect world where all developers follow guidelines rigorously. I do however live in a world where if someone tells developers they no longer have to take accessibility into account on the basis the OS has fixed it for them, when the OS absolutely has not, it is harmful.


Blah, blah, blah; whatever.

I posted a few pictures to show "automatic device independent to device dependent" colour space conversion. Because of this I've spent over 3 weeks battling whiners. I do not want to spend another 3 weeks arguing with yet another whiner over the wording of one freaking paragraph within 15 pages of posts.

ianhamilton wrote:
Brendan wrote:
So you're saying they were developed specifically to avoid the hassle of bothering with anything better, years before they became a convenient way to comply with relevant accessibility laws without bothering to do anything better?


... no, I'm saying they were invented as a way to give blind people an equitable experience, before anyone had even thought about anything to do with software accessibility legislation.


I can imagine the conversation now. "Gee, we could replace the user interface in thousands of proprietary software packages and do something that's actually good in theory but is completely impractical; or we can slap a "screen reader" hack in there and expect blind people to put up with something that will never be close to ideal (but at least it'll be better than nothing)."

ianhamilton wrote:
brendan wrote:
I simply can't understand how anyone can think a screen reader can ever be ideal.


Research will help you with that. Large-scale OS development that I've been involved with has involved big-budget user research with blind users to learn about what their goals and needs are. Universal design is absolutely the right thing to have in mind, but you also need to be realistic about different user groups. Different people, different needs, different goals, different use cases.


Sure - different people, different needs, different goals, different use cases, different front ends.

ianhamilton wrote:
brendan wrote:
Anyone can grab the specifications/standards and write a new piece whenever they like, without anyone's permission and without anyone's source code. If an application has no audio based front end, anyone can add one.


This is where it comprehensively falls flat on its face. Time and time again I've seen developers contacted by blind users asking about accessibility, replying with 'that sounds cool, I'll go investigate', initially deciding to do it because they find out about screenreaders and think that all they have to do is a nice bit of labelling and announcing and they're done, but then finding out that the tech they're working with doesn't have screenreader compatibility, meaning they would have to code up a whole audio front end - at which point the idea gets thrown in the bin because of the amount of work that would entail.


You've completely missed the point. It is not like traditional OSs, where companies rush to become the first to establish vendor lock-in.

Imagine companies/organisations/groups/volunteers that only ever write "audio based front ends" that comply with the established standards for various types of applications. Now; who do you think the blind user would contact when there's no front-end for something? Will the blind user contact the "group A" that creates the standard and doesn't implement any code? Will the blind user contact "completely separate and unrelated group B" that happened to write one of the 3 different back-ends (who never write any front ends)?

ianhamilton wrote:
brendan wrote:
More likely is that the design decisions that benefit blind users (like the robotic sounding synthesised voice) also benefit everyone else.


No. Have a look at the example that you gave regarding the failure to recognise different use cases in the banking websites, and apply that to what you just said here.


The use case is "user can't see". It doesn't matter why they can't see (whether they're blind or are a sighted user without a monitor), the use case is identical in both cases.

ianhamilton wrote:
That's it from me, I've spent a lot of time here now giving you free consultancy when I could have been doing other things, so I won't be coming back to the forum to check replies. I came here as a favour for a friend who wanted me to give you some help. I've given you that help, the facts are there for you to use or ignore as you wish.


You haven't given any help. You've only provided non-constructive whining without suggesting any practical way to improve on anything; and in doing so you've done nothing more than waste my time and yours.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 1:24 pm

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Octocontrabass wrote:
Here's another picture:
[image: the same graph with simulated deuteranopia]
This is the same graph as above, with simulated deuteranopia (the most common form of dichromacy). The neutral point is now a neutral line, and conveniently happens to be very nearly horizontal. The graph is now effectively one-dimensional, as the horizontal dimension no longer provides any useful information. If you then take this one-dimensional graph and convert to polar coordinates, hue can only take on two possible values: "orange" and "blue".


This graph shows exactly what I've been saying - an "infinite" number of hues ranging from (what I'd call) "dirty yellow" to bright blue.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 1:29 pm

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
Is it just a matter of learning curves and discoverability? E.g. a casual user doesn't know about all the commands/keyboard shortcuts, while an advanced user is far more proficient because they use all the commands/shortcuts?
No, it's more than that. A sighted user using an audio interface wants something "friendly" (think of Google Now, or Siri, or Cortana, with their "human-like" voices and methods of interaction). Sure, one could add keyboard shortcuts to something like that, but nobody will use them, because if they want to be using a keyboard then they'll be sitting in front of a monitor and they'd rather use a graphical interface.

On the other hand, blind users don't want "friendly" speech; they want the fastest, most easy-to-understand, most-concise spoken output possible and the ability to get that output as quickly as possible. You won't see a blind user using voice input much more than a sighted user (except perhaps on mobile devices, because on-screen keyboards are difficult for blind users to use so we often use voice typing). For "desktop" tasks we certainly want to be able to use a keyboard and we want to be able to "read" the screen contents like a sighted user, which means we want to hear everything that's on the screen (or everything that we're currently focussing on) as quickly as possible, and to be able to navigate it as fast as possible (with a keyboard).

You can't make a one-size-fits-all audio interface with "simple" functions for everyday sighted users and "advanced" functions for blind users; they're going to conflict with each other. As Ian Hamilton quite rightly said, they're different use cases, have different requirements, and so need a different solution; there's no "one size fits all" no matter how much we wish for it. In fact, that applies to most of your OS.

I'll see about getting some recordings to you, explaining and demonstrating the difference.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 1:45 pm

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
Is it just a matter of learning curves and discoverability? E.g. a casual user doesn't know about all the commands/keyboard shortcuts, while an advanced user is far more proficient because they use all the commands/shortcuts?
No, it's more than that. A sighted user using an audio interface wants something "friendly" (think of Google Now, or Siri, or Cortana, with their "human-like" voices and methods of interaction). Sure, one could add keyboard shortcuts to something like that, but nobody will use them, because if they want to be using a keyboard then they'll be sitting in front of a monitor and they'd rather use a graphical interface.


All of the things you've mentioned (Google Now, Siri, Cortana) are only usable for things that can be done with a "request, reply" format; and for these I doubt they're any different for a blind user at all. Frequent users (both sighted and blind) would both benefit from a few trivial tweaks (speech synthesiser parameters); and casual users (both sighted and blind) would probably prefer the "human-like" voices.

I'm mostly talking about things like editing documents in a word processor, or using a spreadsheet, or using an IDE; which can't be done with a "request, reply" format, and which do have navigation.

onlyonemac wrote:
On the other hand, blind users don't want "friendly" speech; they want the fastest, most easy-to-understand, most-concise spoken output possible and the ability to get that output as quickly as possible. You won't see a blind user using voice input much more than a sighted user (except perhaps on mobile devices, because on-screen keyboards are difficult for blind users to use so we often use voice typing). For "desktop" tasks we certainly want to be able to use a keyboard and we want to be able to "read" the screen contents like a sighted user, which means we want to hear everything that's on the screen (or everything that we're currently focussing on) as quickly as possible, and to be able to navigate it as fast as possible (with a keyboard).

You can't make a one-size-fits-all audio interface with "simple" functions for everyday sighted users and "advanced" functions for blind users; they're going to conflict with each other. As Ian Hamilton quite rightly said, they're different use cases, have different requirements, and so need a different solution; there's no "one size fits all" no matter how much we wish for it. In fact, that applies to most of your OS.

I'll see about getting some recordings to you, explaining and demonstrating the difference.


They are not different use cases at all. They are all just people that can't see the app's output (for whatever reason).

Imagine if I unplugged my monitors and used my computer with no monitor at all for 6 months. Explain to me exactly how a user interface designed for this (a sighted user without any monitors) would in any way be different to a user interface designed for a blind person.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 2:42 pm

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
casual users (both sighted and blind) would probably prefer the "human-like" voices.
For casual use a blind person may prefer a "human-like" voice, but for regular use we don't care about how realistic the voice is because we're not trying to simulate human-to-human interaction but to get information as quickly as possible, and usually a human-like voice isn't the best for that.
Brendan wrote:
I'm mostly talking about things like editing documents in a word processor, or using a spreadsheet, or using an IDE; which can't be done with a "request, reply" format, and which do have navigation.
Developers aren't going to write fully-featured audio interfaces for those applications; they're going to write "convenience" interfaces for what you've termed "casual users".
Brendan wrote:
They are not different use cases at all. They are all just people that can't see the app's output (for whatever reason).
They so *are* different use cases. If a sighted person can't see the output, it's because they're multitasking or on-the-go, in which case they aren't likely to be performing serious tasks; a sighted person is never going to choose an audio interface for serious tasks over a graphical interface. For a blind person, there is no option to see the output, so they rely on the audio interface and need it to perform the tasks that sighted people use the graphical interface for (and which application developers won't bother implementing in an audio interface, because the majority of users - sighted users - are never going to use them).
Brendan wrote:
Imagine if I unplugged my monitors and used my computer with no monitor at all for 6 months. Explain to me exactly how a user interface designed for this (a sighted user without any monitors) would in any way be different to a user interface designed for a blind person.
I'm not going to explain how that differs from a user interface designed for a blind person because it doesn't differ; you're drawing the wrong comparison. Ignoring the fact that no sighted user would ever actually do that (unless I dared you ;-) ), they would be using the same interface that a blind user would be using. The interface that they won't be using is the one that sighted users would use for convenience.

Wait until you hear my screenreader, then you'll appreciate why:
  • I won't accept a "convenience" audio interface as a replacement
  • Sighted users won't accept it as an audio interface

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 3:45 pm

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Quote:
If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see.

If you're dealing with red/green colourblindness, shifting red to orange merely provides a brighter version of the same colour, as does shifting orange to yellow, while shifting yellow towards green provides a dimmer version of the same colour again. Given that the brightness of reds and greens can vary too, that provides no useful way whatsoever to distinguish between any of these colours, other than that the brightest ones that can be displayed must be yellow (with the red and green pixels turned up full bright).

This means that if you're going to shift all the colours into a range where they can be distinguished by a red/green colourblind person, you are working with a very narrow range of colours, from green=yellow=orange=red through cyan=grey=magenta to blue. You're either going to have to stuff all the green shades down into the cyan=grey=magenta range while keeping the reds in the green=yellow=orange=red range, or stuff all the red shades down into the cyan=grey=magenta range while keeping the greens in the green=yellow=orange=red range; and while you do that, the blue to cyan=grey=magenta range has to be shoved further towards the blue. The result will be far inferior to normal vision, but it will work considerably better than a greyscale display - and even greyscale can be perfectly usable: I grew up watching a black & white TV and it was rare that you'd notice it wasn't in colour. Given that colourblind people see the real world in a colour-depleted form anyway, they aren't losing anything when they use a computer that they don't lose all the time normally; but because software is capable of helping them in a way that the outside world can't, software should offer to help, particularly in cases where two colours are used to distinguish between two radically different things and those colours look identical to some users. The simplest way to guard against that is to run all your software in greyscale, so that you can be sure it functions well with all kinds of colourblindness - see the sketch below.
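A minimal sketch of that greyscale check, assuming 8-bit-per-channel RGB and the standard Rec. 709 luma weights (the function name is just for illustration):

Code:
#include <stdint.h>

/* Collapse an RGB pixel to greyscale using Rec. 709 luma weights, as a
   crude "does my UI still work without colour?" test filter. Strictly
   these weights apply to linear RGB; applying them to gamma-encoded sRGB
   is a common approximation. */
static uint8_t rgb_to_grey(uint8_t r, uint8_t g, uint8_t b) {
   double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
   return (uint8_t)(y + 0.5);   /* round to nearest */
}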

I should also add that if a red-green colourblind person wears a cyan filter in front of one eye and a magenta one in front of the other, (s)he can distinguish between all colours, so on the odd occasions where it really matters, that is a practical solution, and it works for the outside world too.

On the issue of displaying sound visually, what I'd try is displaying it at the edges of the screen, using colours to give a guide to stereo separation (in addition to showing the left channel in the left margin and the right channel in the right margin). High sounds would be displayed higher up those margins and low sounds lower, while the volume of each pitch component of the sound would be shown using brightness. This would probably fall far short of showing enough detail for people to understand speech from it, but they might well be able to distinguish between a wide range of sounds and be able to tell the difference between different people's voices. Until we have deaf children growing up exposed to this at a time when their brains are best able to adapt to that input, we aren't going to know how well it would work, but you can bet they'll do better than an adult introduced to it when it's too late to adapt well. Even so, I've spent a lot of time looking at wav data being displayed and I can identify a few speech sounds just from that (in real time). With a better display separating out the frequencies, it gets a lot easier. The experiments needed to see how far this can go have not been done properly yet, so far as I know, but if any experts in this know otherwise, please point me towards software that does this kind of thing so that I can see if I can distinguish sounds usefully from it. If it's written the way I would write it, I reckon I could do it quite well, despite not growing up using it.
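The "display separating out the frequencies" is essentially a short-time magnitude spectrum. A minimal sketch of that step, assuming mono floating-point samples and a naive DFT rather than an FFT (slow, but the idea is obvious):

Code:
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Naive DFT magnitudes for one frame of audio. magnitude[k] ends up as the
   strength of frequency bin k, which the display described above would map
   to vertical position (pitch) and brightness (volume). O(n * bins), so
   only suitable as a sketch; a real implementation would use an FFT. */
void frame_spectrum(const float *samples, size_t n,
                    double *magnitude, size_t bins) {
   for (size_t k = 0; k < bins; k++) {
      double re = 0.0, im = 0.0;
      for (size_t t = 0; t < n; t++) {
         double phase = 2.0 * M_PI * (double)k * (double)t / (double)n;
         re += samples[t] * cos(phase);
         im -= samples[t] * sin(phase);
      }
      magnitude[k] = sqrt(re * re + im * im) / (double)n;
   }
}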

For blind users, I want to see visual displays converted to sound. This is already being done with some software which allows blind people to see some things through sound, but it is far from being done in the best possible way. What you want is something that uses stereo and pitch to indicate screen location (a rough sketch follows below), and my idea would be to use long sounds to represent objects covering a wide area and short ones for objects filling only a few pixels. This could indicate where the different paragraphs and tools are located on the screen, with more detail provided where the cursor is located. It could also provide more information where the user's eyes are looking, and that's a reason why this is particularly urgent: children who are born blind don't learn to direct their eyes towards things, so their eyes just wander about aimlessly, but if they were brought up using the right software from the start (with eye-tracking capability), they would be able to use their eyes as a powerful input device to computers. The longer we fail to provide this for them, the more generations of blind children will miss out and not develop that extremely important tool. I hope Ian Hamilton is informed about this by his friend, because he may be best placed to pass the idea on and make it happen sooner. This is all stuff I plan to do with my own OS, but the ideas are potentially too important not to share, given that it's going to take me a long time to get it all done, particularly as it isn't my top priority.
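As a rough illustration of that mapping (all names, ranges and constants here are arbitrary choices for the sketch, not a tested design): horizontal position becomes stereo pan, vertical position becomes pitch, and on-screen size becomes sound length:

Code:
#include <math.h>

/* Hypothetical mapping from a screen object to a short sound: x becomes
   stereo pan, y becomes pitch (higher on screen = higher pitch, spaced
   exponentially so equal screen steps sound like equal musical intervals),
   and on-screen area becomes duration (big objects get long sounds, tiny
   ones get short blips). */
typedef struct {
   double pan;         /* -1.0 = hard left, +1.0 = hard right */
   double frequency;   /* Hz */
   double duration;    /* seconds */
} Earcon;

Earcon object_to_earcon(double x, double y, double area,
                        double screen_w, double screen_h) {
   Earcon e;
   e.pan = 2.0 * (x / screen_w) - 1.0;
   e.frequency = 200.0 * pow(10.0, 1.0 - y / screen_h);  /* ~2kHz top, 200Hz bottom */
   double fraction = area / (screen_w * screen_h);       /* 0..1 of the screen */
   e.duration = 0.02 + 0.48 * (fraction < 1.0 ? fraction : 1.0);
   return e;
}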

Importantly, the user should be able to tell the interface what to "show" them through sound, and how often. As I type this, there's a cluster of f***ing smilies over on the left of the screen that keep moving about and irritating me, but it would be far worse for a blind person if they had to hear that as an ongoing racket all the time while they're trying to write their post, so they need to be able to tell the OS or browser never to "show" them unless asked to. There's also a colour grid on the right which they will likewise have no interest in. What they will want is to see the paragraphs and to get a hint of what's in them so that they can navigate quickly around the text they're working with. It won't be practical to have all the items on the screen making a noise continually, but there could be occasional scans of the page which indicate the overall look of the larger features, and more frequent scans of the area where the user is looking - though that's more relevant for images than text, where the user is only likely to want sentences, phrases and words, and may want silence a lot of the time so that (s)he can think. The user needs to be able to tell the machine what to "show" and how often to "show" it, and this will vary depending on what's being displayed. Different qualities of note could indicate colours - we're good at hearing differences between a wide range of different musical instruments, and that's only scraping the surface of all the available sound qualities. There's masses of work that needs to be done on this, and massive opportunities to make the world more accessible to blind and deaf people.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
Posted: Sun Feb 14, 2016 3:47 pm

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
casual users (both sighted and blind) would probably prefer the "human-like" voices.
For casual use a blind person may prefer a "human-like" voice, but for regular use we don't care about how realistic the voice is because we're not trying to simulate human-to-human interaction but to get information as quickly as possible, and usually a human-like voice isn't the best for that.


Yes; exactly the same as a sighted regular user would.

onlyonemac wrote:
Brendan wrote:
I'm mostly talking about things like editing documents in a word processor, or using a spreadsheet, or using an IDE; which can't be done with a "request, reply" format, and which do have navigation.
Developers aren't going to write fully-featured audio interfaces for those applications; they're going to write "convenience" interfaces for what you've termed "casual users".


No; they're going to implement an audio interface. They have no reason to care if the speech synthesiser happens to be set to "fast robotic" or "slow natural" (or "lusty maiden"). It's not like they can implement an unusable "casual" interface with no navigation and none of the application's features.

onlyonemac wrote:
Brendan wrote:
They are not different use cases at all. They are all just people that can't see the app's output (for whatever reason).
They so *are* different use cases. If a sighted person can't see the output, it's because they're multitasking or on-the-go, in which case they aren't likely to be performing serious tasks; a sighted person is never going to choose an audio interface for serious tasks over a graphical interface. For a blind person, there is no option to see the output, so they rely on the audio interface and need it to perform the tasks that sighted people use the graphical interface for (and which application developers won't bother implementing in an audio interface, because the majority of users - sighted users - are never going to use them).


Wrong. Sighted people currently don't use audio interfaces for serious tasks simply because currently audio interfaces suck.

A truck driver who spends ~30 hours a week driving might be doing their paperwork while they drive. The old lady who spends 45 minutes each day walking her dog might be writing a romance novel. The kid riding their bike to/from school might be trying to finish homework. The guy washing dishes in the back room of a restaurant might be doing their tax return. The gardener pruning rose bushes might be searching for houses for rent and emailing them to his cousin. A professional fisherman might spend 6 hours a day writing and testing software that analyses stock market prices.

Sure it won't be everyone all the time and video interfaces will always be more popular; but that does not mean there won't be a market for audio interfaces (for both sighted *and* blind people), as long as they don't suck.

Note that things like Google Now, Siri, Cortana are proof that sighted users will use audio interfaces if they don't suck.

onlyonemac wrote:
Brendan wrote:
Imagine if I unplugged my monitors and used my computer with no monitor at all for 6 months. Explain to me exactly how a user interface designed for this (a sighted user without any monitors) would in any way be different to a user interface designed for a blind person.
I'm not going to explain how that differs from a user interface designed for a blind person because it doesn't differ; you're drawing the wrong comparison. Ignoring the fact that no sighted user would ever actually do that (unless I dared you ;-) ), they would be using the same interface that a blind user would be using. The interface that they won't be using is the one that sighted users would use for convenience.

Wait until you hear my screenreader, then you'll appreciate why:
  • I won't accept a "convenience" audio interface as a replacement
  • Sighted users won't accept it as an audio interface


You're mistakenly assuming that blind people can't be casual users, and also mistakenly assuming sighted people can't be regular users of an audio interface. You will continue to be wrong until you understand that these assumptions are nonsense.

The only reason I have to listen to your screen reader is to determine how much it does/doesn't suck (for both sighted and blind users equally).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

