OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sun Feb 14, 2016 7:11 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

DavidCooper wrote:
Quote:
If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see.

If you're dealing with red/green colourblindness, shifting red to orange will merely provide a brighter version of the same colour, as will shifting orange to yellow, and shifting yellow towards green will provide a dimmer version of the same colour again, and given that the brightness of reds and greens can vary too, that provides no useful way whatsoever to distinguish between any of these colours, other than that the brightest ones that can be displayed must be yellow (with the red and green pixels turned up full bright).


I haven't done much research into the most effective "which hues to shift where for each of the different types of colour blindness". However...

To understand what I meant, here's a picture:

[image]

A monitor only has 3 wavelengths of light. These are at roughly 450 nm (blue), 550 nm (green) and 605 nm (red).

For someone who has no long wavelength receptors (one of the 2 different types of red-green colour blindness); this means that the monitor's blue (450 nm) is fine, the monitor's green (550 nm) is fine, and the monitor's red (605 nm) ends up being perceived as dark green and is therefore completely useless.

Now imagine a transformation like:
    blue = original_blue * 0.70 + original_green * 0.30
    green = original_green * 0.40 + original_red * 0.45
    red = original_red

For someone who has no medium wavelength receptors (the other type of red-green colour blindness); the monitor's blue (450 nm) is fine, the monitor's red (605 nm) is fine, and the monitor's green is perceived as a slightly brighter red than the monitor's red.

Now imagine a transformation like:
    blue = original_blue * 0.50 + original_green * 0.50
    green = original_red
    red = original_red
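
As a rough C sketch of how either remap might be applied per pixel (the rgb8 type and clamp255 helper are illustrative names, and the coefficients are just the uncalibrated numbers above):

    #include <stdint.h>

    /* Rough per-pixel remap using the coefficients above ("protan" = no long
       wavelength receptors, "deutan" = no medium wavelength receptors).
       The rgb8 type and clamp255 helper are illustrative; the coefficients
       are the uncalibrated numbers from this post. */

    typedef struct { uint8_t r, g, b; } rgb8;

    static uint8_t clamp255(double x) {
        if (x < 0.0)   return 0;
        if (x > 255.0) return 255;
        return (uint8_t)(x + 0.5);
    }

    rgb8 remap_protan(rgb8 in) {        /* no long wavelength receptors */
        rgb8 out;
        out.b = clamp255(in.b * 0.70 + in.g * 0.30);
        out.g = clamp255(in.g * 0.40 + in.r * 0.45);
        out.r = in.r;
        return out;
    }

    rgb8 remap_deutan(rgb8 in) {        /* no medium wavelength receptors */
        rgb8 out;
        out.b = clamp255(in.b * 0.50 + in.g * 0.50);
        out.g = in.r;
        out.r = in.r;
        return out;
    }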

DavidCooper wrote:
This means that if you're going to shift all the colours into a range where they can be distinguished by a red/green colourblind person, you are working with a very narrow range of colours from green=yellow=orange=red through cyan=grey=magenta to blue, and you're either going to have to stuff all the green shades down into the cyan=grey=magenta range while keeping the reds in the green=yellow=orange=red range or stuff all the red shades down into the cyan=grey=magenta range while keeping the greens in the green=yellow=orange=red range. While you do that, the blue to cyan=grey=magenta range has to be shoved further towards the blue. The result will be far inferior to normal vision, but it will work considerably better than a greyscale display, and they can be perfectly usable - I grew up watching a black & white TV and it was rare that you'd notice that it wasn't in colour. Given that colourblind people see the real world in a colour-depleted form anyway, they aren't losing anything when they use a computer that they don't lose all the time normally, but because software is capable of helping them in a way that the outside world can't, software should offer to help, and particularly in cases where two colours are used to distinguish between two radically different things and those colours look identical to some users, but the simplest way to guard against that is to run all your software in greyscale so that you can be sure it functions well with all kinds of colourblindness.


You're writing a spreadsheet application. You've decided that the icon/logo for the application will be a 3D model of an abacus. You are responsible for choosing the colour of the abacus' beads and the colour of its frame. Your boss wants the beads and frame to be different colours (and likes bright colours).

The 3D model will be displayed on the desktop, and there are 1000 colour blind users all with completely different coloured background images. It will also be displayed in the application's help system where the background of all "help pages" depends on the user's "GUI theme" and there are 1000 colour blind users all with completely different "GUI themes".

Now tell me; what are you testing with your "grey scale" test to ensure that all users can see the abacus' beads and frame properly?

This is the problem with "let the application developer figure it out". It simply can't work.

DavidCooper wrote:
I should also add that if a red-green colourblind person wears a cyan filter in front of one eye and a magenta one in front of the other, (s)he can distinguish between all colours, so on the odd occasions where it really matters, that is a practical solution, and it works for the outside world too.


I don't know what the best way to handle that would be (no hue shift, or a special hue shift designed for "red-green colour blind with cyan and magenta filters", or "let's give them anisotropic 3D and maybe they'll forget they're colour blind"). ;)

DavidCooper wrote:
On the issue of displaying sound visually, what I'd try is displaying it at the edges of the screen using colours to give a guide to stereo separation (in addition to showing the left channel in the left margin and the right channel in the right margin). High sounds would be displayed higher up those margins and low sounds lower, while the volumes for each pitch component of the sound would be shown using brightness.


Human hearing can tell the difference between front/back (and for home theatre/high-end gaming I'd assume surround sound speakers) but not above/below. For this reason I was thinking more like a circular disk, a bit like this (but transparent) super-imposed over the entire bottom half of the screen:
[image: circular disk]
The sound in 3D virtual world would be mapped to 2D coords on the disk (with the centre representing the camera).
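
A minimal sketch of that mapping (assuming camera-space coordinates with x = right, y = up, z = forward; the function name and the max_range parameter are illustrative only):

    #include <math.h>

    /* Sketch: map a sound's position in camera space (x = right, y = up,
       z = forward) onto a unit disk centred on the camera/listener.
       Height (y) is discarded because the disk only shows the horizontal
       plane; distance is clamped so far-away sounds still land on the disk.
       The names and the max_range parameter are assumptions for
       illustration only. */

    typedef struct { double x, y; } disk_pos;    /* both in -1..1 */

    disk_pos sound_to_disk(double sx, double sy, double sz, double max_range) {
        (void)sy;                                /* vertical position not shown */
        double dist = sqrt(sx * sx + sz * sz);
        disk_pos p = { 0.0, 0.0 };               /* sound at the camera itself */
        if (dist > 0.0 && max_range > 0.0) {
            double r = dist / max_range;         /* 0 at camera, 1 at max_range */
            if (r > 1.0) r = 1.0;                /* clamp to the disk's edge */
            p.x = (sx / dist) * r;
            p.y = (sz / dist) * r;               /* forward = "top" of the disk */
        }
        return p;
    }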

I think ianhamilton was right in that the pitch doesn't mean much to deaf users (but wrong in that information that can be determined from the pitch can mean things to deaf users). For a simple example, something like music could be represented as notes or musical instruments that are recognisable, but would be confusing as pitch.

Of course the details would take some research and trial and error.

DavidCooper wrote:
This would probably fall far short of showing enough detail for people to understand speech from it, but they might well be able to distinguish between a wide range of sounds and be able to tell the difference between different people's voices.


Speech would be converted to readable text.

DavidCooper wrote:
For blind users, I want to see visual displays converted to sound. This is already being done with some software which allows blind people to see some things through sound, but it is far from being done in the best possible way. What you want is something that uses stereo and pitch to indicate screen location, and my idea would be to use long sounds to represent objects covering a wide area and short ones for ones filling only a few pixels. This could indicate where the different paragraphs and tools are located on the screen, and more detail would be provided where the cursor is located. It could also provide more information where the user's eyes are looking, and that's a reason why this is particularly urgent - children who are born blind don't learn to direct their eyes towards things, so they just wander about aimlessly, but if they were brought up using the right software from the start (with eye-tracking capability), they would be able to use their eyes as a powerful input device to computers. The longer we fail to provide this for them, the more generations of blind children will miss out on this and not develop that extremely important tool. I hope Ian Hamilton is informed about this by his friend, because he may be best placed to pass the idea on and make it happen sooner. This is all stuff I plan to do with my own OS, but the ideas are potentially too important not to share given that it's going to take me a long time to get it all done, and particularly as it isn't my top priority.


I'm going to need time to think about this. My first thought was "why would a blind user have a screen anyway"; but then I realised the blind user could "look" anywhere without restrictions caused by physical monitor dimensions. I'm not sure it'd be fast though - e.g. lots of "trial and error" to initially find the right place/s on a previously unknown surface, where anything that changes the position of anything (scrolling, turning pages, cut & paste, etc) would send you right back to "trial and error".

I tend to think onlyonemac was right about tree structures being the most efficient for navigation. For something like (e.g.) Intel's manual I'd want to be able to skip from one chapter heading to the next, then move down and do the same for section headings, then move down and do the same for paragraphs, then sentences, then individual words. Programming languages are abstract syntax trees. File systems are trees. Menus are trees. My bookmarks are mostly a jumbled linear list because I'm too lazy to sort them into categories properly and (as a sighted person) it's a pain in the neck trying to find anything.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sun Feb 14, 2016 7:17 pm 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5069
Brendan wrote:
This graph shows exactly what I've been saying - an "infinite" number of hues ranging from (what I'd call) "dirty yellow" to bright blue.

Quote:
hue

(noun)

the attribute of a color by virtue of which it is discernible as red, green, etc., and which is dependent on its dominant wavelength, and independent of intensity or lightness.

The graph shows an "infinite" number of intensities (saturations) with only two hues. It seems the problem here is one of terminology; I assumed you were using the commonly-accepted definition of hue.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sun Feb 14, 2016 8:08 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Octocontrabass wrote:
Brendan wrote:
This graph shows exactly what I've been saying - an "infinite" number of hues ranging from (what I'd call) "dirty yellow" to bright blue.

Quote:
hue

(noun)

the attribute of a color by virtue of which it is discernible as red, green, etc., and which is dependent on its dominant wavelength, and independent of intensity or lightness.

The graph shows an "infinite" number of intensities (saturations) with only two hues. It seems the problem here is one of terminology; I assumed you were using the commonly-accepted definition of hue.


I'm using "hue" as "discernible difference in colour", because it's the discernible difference in colour that matters and "hue" is just a pointless/irrelevant distraction (word games). For exactly the same lightness, the graph (from top to bottom) goes "yellow to grey to blue" with an infinite number of colours for each lightness (with an infinite number of "lightness-ess").

This is the Lab colour space; "lightness" is not shown (it is in the Z direction, e.g. white towards you).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Sun Feb 14, 2016 9:38 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Brendan wrote:
For someone who has no long wavelength receptors (one of the 2 different types of red-green colour blindness); this means that the monitor's blue (450 nm) is fine, the monitor's green (550 nm) is fine, and the monitor's red (605 nm) ends up being perceived as dark green and is therefore completely useless.

Now imagine a transformation like:
    blue = original_blue * 0.70 + original_green * 0.30
    green = original_green * 0.40 + original_red * 0.45
    red = original_red

For someone who has no medium wavelength receptors (the other type of red-green colour blindness); the monitor's blue (450 nm) is fine, the monitor's red (605 nm) is fine, and the monitor's green is perceived as a slightly brighter red than the monitor's red.

Now imagine a transformation like:
    blue = original_blue * 0.50 + original_green * 0.50
    green = original_red
    red = original_red

Well, the test would be to produce pictures to illustrate those and see how useful the range is once you've packed it like that. In the first case you will have magenta and green looking the same, and in the second case it looks as if you'll lose the ability to distinguish between blue and green.

Quote:
Now tell me; what are you testing with your "grey scale" test to ensure that all users can see the abacus' beads and frame properly?

By checking the program in greyscale, you're getting programmers to make sure that in the worst possible case where a person can't see any colours at all but merely shades of grey, the program will still be usable for them (and will therefore work with all colourblind people too). If you design a program where the ability to distinguish between a wide range of colours is crucial for its operation, compressing the range (or eliminating colour altogether) may render the program unusable to some users because the differences for them (even after you've tweaked them to make it as easy as possible for them as individuals) are no longer sufficiently obvious. If it can't be done in greyscale because there are too many similar shades that can't be distinguished adequately, no amount of colour/shade tweaking will fix that for them, but the programmer who works in full colour will be blissfully unaware of that. The same could apply for someone with less severe colourblindness.

Our eyes can distinguish between millions of shades, but that doesn't mean we can tell them apart when we only see one of them at a time - the differences show up when you put them side by side, but in a series of coloured buttons you could have dozens of them that look green and you don't know which of those buttons do what because you can't actually tell which are which, even if you can say that one button is a very slightly different tint from another one near it. If you've ever tried to make sense of a diagram where lines are plotted in different colours, you'll find that once there are more than a dozen or so, there will be some that you find near impossible to tell apart to match them up with the key - that's how bad we really are at telling colours apart.
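
The greyscale check itself is trivial to implement - it's just a per-pixel luma replacement. A rough sketch in C, assuming packed 8-bit RGB and the usual Rec. 601 weights (any similar weighting would do):

    #include <stdint.h>

    /* Sketch of a "greyscale preview" filter: replace every pixel with its
       luma so a developer can check that the UI still works with no colour
       information at all. Assumes packed 8-bit RGB; the 0.299/0.587/0.114
       weights are the common Rec. 601 luma approximation. */

    void greyscale_preview(uint8_t *rgb, int pixel_count) {
        for (int i = 0; i < pixel_count; i++) {
            uint8_t r = rgb[i * 3 + 0];
            uint8_t g = rgb[i * 3 + 1];
            uint8_t b = rgb[i * 3 + 2];
            uint8_t y = (uint8_t)(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
            rgb[i * 3 + 0] = y;
            rgb[i * 3 + 1] = y;
            rgb[i * 3 + 2] = y;
        }
    }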

Quote:
DavidCooper wrote:
I should also add that if a red-green colourblind person wears a cyan filter in front of one eye and a magenta one in front of the other, (s)he can distinguish between all colours, so on the odd occasions where it really matters, that is a practical solution, and it works for the outside world too.


I don't know what the best way to handle that would be (no hue shift, or a special hue shift designed for "red-green colour blind with cyan and magenta filters", or "let's give them anisotropic 3D and maybe they'll forget they're colour blind"). ;)

They might have to open and close one eye repeatedly to see the differences properly when using filters, but it would allow them to tell which button is red and which is green, and that could stop them deleting something by mistake. [Of course, all deletes should be reversible, but that's another design issue.]

Quote:
Human hearing can tell the difference between front/back (and for home theatre/high-end gaming I'd assume surround sound speakers) but not above/below.

Be aware that people can only tell front/back differences by turning their head slightly - if you lock their head still, the ability is lost. That means that a short sound cannot be placed with precision, but a longer lasting sound can.

Quote:
I think ianhamilton was right in that the pitch doesn't mean much to deaf users (but wrong in that information that can be determined from the pitch can mean things to deaf users). For a simple example, something like music could be represented as notes or musical instruments that are recognisable, but would be confusing as pitch.

I'm not sure he was right about that - deaf people can feel vibrations and detect pitch differences from that feel, and if they grow up seeing sound represented visually with high pitch higher up a screen, they'll make all the right associations. Someone who's always been deaf and who's never been exposed to software that makes pitch meaningful will take much longer to adapt to the idea, so they aren't the best people to tell you what's useful and what isn't.

Quote:
DavidCooper wrote:
This would probably fall far short of showing enough detail for people to understand speech from it, but they might well be able to distinguish between a wide range of sounds and be able to tell the difference between different people's voices.


Speech would be converted to readable text.

Certainly, but there may be a visual way of displaying sound that's easier for deaf people to read if they grow up with it. Providing phonetic text may be much more efficient for them if they're allowed to use that instead of our ridiculous standard English spellings (which make it harder to learn to lipread), and it's quite possible that if you take a sound wave and split it into its component frequencies and their volumes, and then do more work on the result to make the phonemes stand out, it might be possible for them to understand the end signal and have enough extra information carried in it to be able to identify accents and to recognise the person talking at the same time just from the look of the sounds they're using. Again this is something that has not been tried out adequately to find out what the limits of possibility are. If they grow up using such a system, it could be both easy and powerful for them, providing them with much more information and better efficiency without any downside.

Quote:
I tend to think onlyonemac's was right about tree structures being the most efficient for navigation. For something like (e.g.) Intel's manual I'd want to be able to skip from one chapter heading to the next, then move down and do the same for section headings, then move down and do the same for paragraphs, then sentences, then individual words. Programming languages are abstract syntax trees. File systems are trees. Menus are trees. My bookmarks are mostly jumbled linear list because I'm too lazy to sort them into categories properly and (as a sighted person) it's a pain in the neck trying to find anything.

If I'm reading a book and want to look back a few pages for a bit of information that's important, I can find it quickly because I have a visual memory of where it was on the page and what the arrangement of paragraphs looked like. That doesn't work so well with scrollable text (which I hate for that reason), but it means that the look of the data and location on the screen is useful. A blind person who doesn't get that extra part of the information has more work to do to find something they read a few minutes before. Often I can go to a book and flip through the pages to find something written in it which I can find from my mental map of the layout of that book in terms of the illustrations and tables in it. With an electronic book it's possible to search for terms that might appear in it, but that often doesn't help. Having that visualisation of the layout of the book is extremely helpful, and it should be possible to provide that for blind people too - even if they've always been blind, they can still imagine their way around places by the layout. Once machines understand the meaning of all the text, this may not be so important as you'll be able to ask directly for the information you're looking for and the machine will take you straight to it even if the wording doesn't match up at all, so it may turn out that having a visual understanding of the layout of texts isn't going to be at all important in the future, but it is useful now and might always be.

When you're writing something, the same applies. If the machine can tell a blind person where a paragraph is, they can look at the part of that paragraph where they think the thing they're looking for was written and the machine can start reading from there - the fact that they can't actually see anything doesn't matter, but with the machine responding in this way, it's almost as if they can see.

I can imagine them looking at a page with text and diagrams and hearing where the blocks of text and the tables are, and one picture might sound like a graph. They look more closely at the graph and they hear a noise tracing out the line that's drawn on it, the sound moving from left to right and going up and down in pitch. They look away and it stops. They look back and hear it again. They look down and it tells them what the horizontal axis represents, etc.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 4:00 am 

Joined: Mon Jan 03, 2011 6:58 pm
Posts: 283
DavidCooper wrote:
Be aware that people can only tell front/back differences by turning their head slightly - if you lock their head still, the ability is lost. That means that a short sound cannot be placed with precision, but a longer lasting sound can.


I just want to quickly point something out. What you said is true... less any assumptions. In 3D games (I play FPS from time to time) a lot can be "surmised"/assumed, and the brain is very good at filling in those gaps. So a consistent and logical use of sounds can allow a user's brain to fill in the gaps.

- Monk

P.S. The rest of what I read seems logical, but outside my area of experience ;)


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 4:40 am 

Joined: Wed Oct 18, 2006 3:45 am
Posts: 9301
Location: On the balcony, where I can actually keep 1½m distance
ianhamilton wrote:
Take red text on a black background. Using the greyscale test, that would look fine. However, red on black is actually a very common complaint from protanopes. If you're red deficient, red appears much darker for you, more like a dark brown colour.
So how does this work in detail? I'd have expected that rods would at least be able to determine luminance in the general case, and thus that pure red on black would effectively preview as ~33% over 0% in luminance and still be legible as a result - be it suboptimal.

Of course, for contrast you'd want to have a more saturated red on dark backgrounds (and thus include bits of blue and green) and a greyscale test would certainly show that the contrast is poor for pure colours over black in general. So where does the process in its entirety go wrong with red deficients over a full lack of colour reception? Designers that stretch rules to pass their designs?

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 7:50 am 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5069
Combuster wrote:
ianhamilton wrote:
Take red text on a black background. Using the greyscale test, that would look fine. However, red on black is actually a very common complaint from protanopes. If you're red deficient, red appears much darker for you, more like a dark brown colour.
So how does this work in detail? I'd have expected that rods would at least be able to determine luminance in the general case, and thus that pure red on black would effectively preview as ~33% over 0% in luminance and still be legible as a result - be it suboptimal.

Rods are insensitive to red light.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 8:34 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

DavidCooper wrote:
Brendan wrote:
For someone who has no long wavelength receptors (one of the 2 different types of red-green colour blindness); this means that the monitor's blue (450 nm) is fine, the monitor's green (550 nm) is fine, and the monitor's red (605 nm) ends up being perceived as dark green and is therefore completely useless.

Now imagine a transformation like:
    blue = original_blue * 0.70 + original_green * 0.30
    green = original_green * 0.40 + original_red * 0.45
    red = original_red

For someone who has no medium wavelength receptors (the other type of red-green colour blindness); the monitor's blue (450 nm) is fine, the monitor's red (605 nm) is fine, and the monitor's green is perceived as a slightly brighter red than the monitor's red.

Now imagine a transformation like:
    blue = original_blue * 0.50 + original_green * 0.50
    green = original_red
    red = original_red

Well, the test would be to produce pictures to illustrate those and see how useful the range is once you've packed it like that. In the first case you will have magenta and green looking the same, and in the second case it looks as if you'll lose the ability to distinguish between blue and green.


Yes; but the lightness would become more correct. For a worst case; imagine an underground area lit with bright red lights where the source image consists of shades of red. For someone who has no long wavelength receptors, instead of seeing "everything black" they see a difference between lit and unlit.

Essentially; we're converting a 3 dimensional colour space (CIE XYZ) down to a 2 dimensional colour space (for dichromats) or a 1 dimensional space (for monochromats). Some information must be lost, the only question is which information. Getting "light and dark" right is important because that's what humans use to infer the shape and location of objects (which is especially important for the "3D graphics displayed on 2D screen" case); and getting "light and dark" right is completely possible in all cases. Colour/hue isn't as important, and is mostly a lost cause anyway (it's impossible to make colour blind people see colour the same as people with normal vision).

DavidCooper wrote:
Quote:
Now tell me; what are you testing with your "grey scale" test to ensure that all users can see the abacus' beads and frame properly?

By checking the program in greyscale, you're getting programmers to make sure that in the worst possible case where a person can't see any colours at all but merely shades of grey, the program will still be usable for them (and will therefore work with all colourblind people too).


My point is that you'd be comparing "known shade of grey to unknowable shade of grey" to see if they're too close. The problem is in that "unknowable" part. It's effectively the equivalent of "find a value for X where X != Y, for all values of Y". It's impossible. It can not work.

For something where a developer is responsible for the entire scene and all colours can be known, it is possible. For my OS, this rarely happens. A scene may be any background picture from the Internet, a GUI theme from user preferences, 50 different 3D models from 50 different/separate artists/companies that have never met, lighting/shadows that changes in both location and colour throughout the day, a camera that moves and makes it impossible to assume any object will be in front of any other object, etc.

DavidCooper wrote:
Quote:
Human hearing can tell the difference between front/back (and for home theatre/high-end gaming I'd assume surround sound speakers) but not above/below.

Be aware that people can only tell front/back differences by turning their head slightly - if you lock their head still, the ability is lost. That means that a short sound cannot be placed with precision, but a longer lasting sound can.


The shape of (the external part of) the human ear means that sounds from behind you end up different to sounds in front of you.

Of course it does depend on a whole lot of other factors (e.g. it's much harder to determine the location of lower frequency sounds, things like echo/reflection make it harder, etc).

DavidCooper wrote:
Quote:
I think ianhamilton was right in that the pitch doesn't mean much to deaf users (but wrong in that information that can be determined from the pitch can mean things to deaf users). For a simple example, something like music could be represented as notes or musical instruments that are recognisable, but would be confusing as pitch.

I'm not sure he was right about that - deaf people can feel vibrations and detect pitch differences from that feel, and if they grow up seeing sound represented visually with high pitch higher up a screen, they'll make all the right associations. Someone who's always been deaf and who's never been exposed to software that makes pitch meaningful will take much longer to adapt to the idea, so they aren't the best people to tell you what's useful and what isn't.


I didn't think of being able to "feel" sound. You're right; and using height to reflect pitch should help.

That would mean "location in plane" to represent location relative to camera, "height above plane" to represent pitch, intensity to represent amplitude, and various icons to give clues about characteristics (attack, sustain, decay, release).
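
Something like this, perhaps (a sketch - all of the names are illustrative, not a real interface):

    /* Sketch of what the renderer might be given for each visualised sound;
       all names are illustrative, not a real interface. */

    typedef enum {
        SOUND_ICON_GENERIC,
        SOUND_ICON_SPEECH,         /* shown as readable text elsewhere */
        SOUND_ICON_MUSIC_NOTE,
        SOUND_ICON_PERCUSSIVE,     /* sharp attack, fast decay */
        SOUND_ICON_SUSTAINED       /* slow attack, long sustain */
    } sound_icon;

    typedef struct {
        float plane_x, plane_y;    /* location on the disk, camera at (0,0) */
        float height;              /* height above the plane, from pitch */
        float intensity;           /* brightness/opacity, from amplitude */
        sound_icon icon;           /* hint about the sound's character */
    } sound_marker;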

DavidCooper wrote:
Quote:
Speech would be converted to readable text.

Certainly, but there may be a visual way of displaying sound that's easier for deaf people to read if they grow up with it. Providing phonetic text may be much more efficient for them if they're allowed to use that instead of our ridiculous standard English spellings (which make it harder to learn to lipread), and it's quite possible that if you take a sound wave and split it into its component frequencies and their volumes, and then do more work on the result to make the phonemes stand out, it might be possible for them to understand the end signal and have enough extra information carried in it to be able to identify accents and to recognise the person talking at the same time just from the look of the sounds they're using. Again this is something that has not been tried out adequately to find out what the limits of possibility are. If they grow up using such a system, it could be both easy and powerful for them, providing them with much more information and better efficiency without any downside.


Phonetics will be important for a lot of things anyway (e.g. speech synthesis); and support for something like the International Phonetic Alphabet would be nearly impossible to avoid.

One of the other things I want to do is replace the current practice in games (hiring voice actors and taking recordings) with something that doesn't completely break collaboration ("modding"); and do the same for things like tutorial videos, etc. Essentially; wherever possible, replace audio recordings with something based on "phonetics with markup". This would also make it much easier (and more efficient) to do the "sound visualisation" stuff (e.g. display the "phonetics with markup" and avoid speech recognition).

Of course this is an area I haven't even begun researching yet. I don't know how possible it would be to create a suitable "phonetics with markup" written language that is expressive enough (I'm not convinced IPA itself is flexible enough alone); and don't know how hard it'd be to create a speech synthesiser with high enough quality.

DavidCooper wrote:
Quote:
I tend to think onlyonemac's was right about tree structures being the most efficient for navigation. For something like (e.g.) Intel's manual I'd want to be able to skip from one chapter heading to the next, then move down and do the same for section headings, then move down and do the same for paragraphs, then sentences, then individual words. Programming languages are abstract syntax trees. File systems are trees. Menus are trees. My bookmarks are mostly jumbled linear list because I'm too lazy to sort them into categories properly and (as a sighted person) it's a pain in the neck trying to find anything.

If I'm reading a book and want to look back a few pages for a bit of information that's important, I can find it quickly because I have a visual memory of where it was on the page and what the arrangement of paragraphs looked like. That doesn't work so well with scrollable text (which I hate for that reason), but it means that the look of the data and location on the screen is useful. A blind person who doesn't get that extra part of the information has more work to do to find something they read a few minutes before. Often I can go to a book and flip through the pages to find something written in it which I can find from my mental map of the layout of that book in terms of the illustrations and tables in it. With an electronic book it's possible to search for terms that might appear in it, but that often doesn't help. Having that visualisation of the layout of the book is extremely helpful, and it should be possible to provide that for blind people too - even if they've always been blind, they can still imagine their way around places by the layout.


I think "visualisation" isn't quite the right word; and there's something more fundamental - a mental model constructed by a person from all available sensory input. This mental model works well for physical things, but doesn't work for anything intangible, and a lot of things are intangible.

As a sighted person; my mental model of Intel's manuals is much closer to a tree than anything else. My mental model of mathematical equations is more like a chronological sequence of steps in "operator first" order - e.g. "x = (y**0.7 + z) / 7" somehow ends up more like "x = (** y 0.7), (+ z), (/ 7)", and "d = sqrt(x*x + y*y + z*z)" ends up as "? = (* x x); ? = (* y y); ? = (* z z); d = (+ ? ? ?), (sqrt)". Note that this may have been caused by prolonged use of assembly language.

What I'm saying is that I don't think "visualisation of data on 2D plane/s" is good for sighted users (especially for intangible things); and is an unnecessary limitation that has carried over from "non-interactive" historical representations (paper). It's one of the reasons I'm in favour of abandoning "plain text" for programming, and one of the reasons I'm going for "interactive 3D" user interfaces (for sighted people). Blindness wouldn't make "visualisation of data on 2D plane/s" less unsuitable.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 9:09 am 

Joined: Sat Mar 01, 2014 2:59 pm
Posts: 1146
Brendan wrote:
Wrong. Sighted people currently don't use audio interfaces for serious tasks simply because currently audio interfaces suck.

A truck driver who spends ~30 hours a week driving might be doing their paperwork while they drive. The old lady who spends 45 minutes each day walking her dog might be writing a romance novel. The kid riding their bike to/from school might be trying to finish homework. The guy washing dishes in the back room of a restaurant might be doing their tax return. The gardener pruning rose bushes might be searching for houses for rent and emailing them to his cousin. A professional fisherman might spend 6 hours a day writing and testing software that analyses stock market prices.
I consider all those use cases very reasonable, but there's something that they have in common with each other that they don't have in common with a blind person working at a desk in an office: those people are all doing something else as well. That means, they're out somewhere with a portable device and they're using their hands and eyes to perform one task, so they can simultaneously use their voice and ears to perform another task. They'll all be giving voice commands, listening to the output, giving another command, listening to more output, and so on. They don't press one key or click one mouse button to perform those tasks. They're not going to use those same audio interfaces when they're at home/in the office with their desktop or laptop computer, where they've got a keyboard, mouse, and monitor. But you're forcing blind users with a desktop/laptop, where they've got a keyboard, to use an audio interface that's designed for voice control on portable devices. What that means is that application developers will design their audio interface to work well with voice control, but aren't going to bother with keyboard control because all of your "sighted user" use cases are use cases where a keyboard will never be used, and you're expecting blind users to use the same interface which simply hasn't been designed for desktop use because the developer thought "I'm not going to bother making this audio interface work with a keyboard because if there's a keyboard available chances are you're sitting at a desk and you're going to use the graphical interface".
Brendan wrote:
You're mistakenly assuming that blind people can't be casual users, and also mistakenly assuming sighted people can't be regular users of an audio interface. You will continue to be wrong until you understand that these assumptions are nonsense.
I never said that blind people can't be casual users; what I said is that they don't want to be restricted to being casual users because the advanced features of an application are only available in the graphical interface. I also never said that sighted people can't be regular users of audio interfaces; what I said is that they aren't going to use an audio interface in the same situations that blind people are required to use audio interfaces, so aren't going to place the same demands on the interface that blind users will (e.g. a sighted user isn't going to care if they can't change the font size of their document with the audio interface because they'll do that later when they've got a monitor available, but a blind user who's trying to produce a professional-looking document certainly will care).
Brendan wrote:
The only reason I have to listen to your screen reader is to determine how much it does/doesn't suck (for both sighted and blind users equally).
I've already told you that it sucks; I've also told you that your proposed solution to it sucking isn't going to stop it from sucking.

_________________
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 9:22 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Combuster wrote:
ianhamilton wrote:
Take red text on a black background. Using the greyscale test, that would look fine. However, red on black is actually a very common complaint from protanopes. If you're red deficient, red appears much darker for you, more like a dark brown colour.
So how does this work in detail? I'd have expected that rods would at least be able to determine luminance in the general case, and thus that pure red on black would effectively preview as ~33% over 0% in luminance and still be legible as a result - be it suboptimal.


Rods are only used for very low light conditions. Under normal lighting conditions they're saturated ("stuck at max.") and contribute nothing. For this reason they're virtually always ignored for any colour/light modelling and research (which is something I consider "unfortunate" given that it should probably be taken into account if you're planning a HDR/"auto-iris emulation" system).

To understand colour you need to understand the way a human eye works, which means understanding this diagram:

[image: cone response curves]

You can think of this as 3 types of sensors that create 3 analogue inputs to a human's brain. Let's call these analogue signals A, B and C. The white curves in that picture represent the frequency response of each of the types of sensors (the signal strength you'd get from each sensor for each frequency of monochromatic light). The brain decodes the 3 signals and doesn't have any idea which wavelength/s caused them. This is why, with only 3 primary colours at fixed wavelengths, you can trick the brain into thinking it's seeing monochromatic light at most wavelengths. Note that the range of wavelengths that each type of sensor detects overlaps with the ranges the other types of sensors detect. Light at (e.g.) 560 nm ends up as "roughly equal B signal and C signal strengths".

Colour blindness is typically the result of one (or more) of the 3 types of sensors failing (and taking one of the analogue signals with it). Because the sensors' ranges overlap, with only 2 types of sensors the human brain can still differentiate some wavelengths of light. Without the long wavelength sensors you'd see all light from ~530 nm to ~650 nm as "B signal" only (just with different signal strengths) and wouldn't be able to tell those wavelengths apart, but light from 375 nm to 530 nm would be different ratios of "A signal and B signal strengths" and you would be able to tell the wavelengths apart.
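
To make the "3 sensors, 3 signals" idea concrete, here's a deliberately crude C sketch that models each sensor as a Gaussian response around its approximate peak (about 420 nm, 534 nm and 564 nm) and prints the signals the monitor's three primaries would produce. The Gaussian shape and the 60 nm width are rough approximations for illustration only, not real cone response curves:

    #include <math.h>
    #include <stdio.h>

    /* Crude illustration only: model each sensor as a Gaussian response
       around its approximate peak (~420 nm for A/short, ~534 nm for
       B/medium, ~564 nm for C/long). The Gaussian shape and the 60 nm
       width are rough approximations, not real cone fundamentals. */

    static double sensor(double wavelength_nm, double peak_nm) {
        double d = (wavelength_nm - peak_nm) / 60.0;
        return exp(-d * d);
    }

    int main(void) {
        const double primary[3] = { 450.0, 550.0, 605.0 };  /* blue, green, red */
        const char  *name[3]    = { "blue", "green", "red" };

        for (int i = 0; i < 3; i++) {
            double a = sensor(primary[i], 420.0);   /* "A" (short) */
            double b = sensor(primary[i], 534.0);   /* "B" (medium) */
            double c = sensor(primary[i], 564.0);   /* "C" (long) */
            /* With the long wavelength sensors gone, only A and B remain;
               the red primary then produces just a weak B response. */
            printf("%5s (%.0f nm): A=%.2f B=%.2f C=%.2f   without C: A=%.2f B=%.2f\n",
                   name[i], primary[i], a, b, c, a, b);
        }
        return 0;
    }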

However; it's not this simple.

Real light isn't monochromatic. White light ("all wavelengths" in theory) needs a minimum of 3 wavelengths. A colour like purple requires a minimum of 2 wavelengths.

In addition; colour blindness isn't that simple either as there's also "partial failures" - rather than having no long wavelength cones at all; someone might just have a reduced number of long wavelength cones; and instead of not being able to see light in the 650 nm to 700 nm range they might see it at reduced strength.

Finally; we're stuck with the limitations of computer displays, which typically only have 3 wavelengths of light. This severely limits "potential work-arounds" (including work-arounds for "reduced gamut" for people with normal vision). Ideally it'd be nice if we could control the analogue "A, B and C" signals directly. Sadly the response curves for the eye's sensors don't allow that.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 10:06 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

onlyonemac wrote:
Brendan wrote:
Wrong. Sighted people currently don't use audio interfaces for serious tasks simply because currently audio interfaces suck.

A truck driver who spends ~30 hours a week driving might be doing their paperwork while they drive. The old lady who spends 45 minutes each day walking her dog might be writing a romance novel. The kid riding their bike to/from school might be trying to finish homework. The guy washing dishes in the back room of a restaurant might be doing their tax return. The gardener pruning rose bushes might be searching for houses for rent and emailing them to his cousin. A professional fisherman might spend 6 hours a day writing and testing software that analyses stock market prices.
I consider all those use cases very reasonable, but there's something that they have in common with each other that they don't have in common with a blind person working at a desk in an office: those people are all doing something else as well. That means, they're out somewhere with a portable device and they're using their hands and eyes to perform one task, so they can simultaneously use their voice and ears to perform another task. They'll all be giving voice commands, listening to the output, giving another command, listening to more output, and so on. They don't press one key or click one mouse button to perform those tasks. They're not going to use those same audio interfaces when they're at home/in the office with their desktop or laptop computer, where they've got a keyboard, mouse, and monitor. But you're forcing blind users with a desktop/laptop, where they've got a keyboard, to use an audio interface that's designed for voice control on portable devices. What that means is that application developers will design their audio interface to work well with voice control, but aren't going to bother with keyboard control because all of your "sighted user" use cases are use cases where a keyboard will never be used, and you're expecting blind users to use the same interface which simply hasn't been designed for desktop use because the developer thought "I'm not going to bother making this audio interface work with a keyboard because if there's a keyboard available chances are you're sitting at a desk and you're going to use the graphical interface".


What I'm doing is focusing on the output side of things (and neglecting the input side), where the output is virtually identical (sans speech synth parameters).

For input; it's just events sent to the front end. For example, the front-end might receive an "UP" event, or a "NEXT_CHILD" event, or an "ESCAPE" event. The developer of the front end doesn't have any reason to care if these events are coming from a keyboard or speech recognition or anything else.
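
For example (a sketch only - the names are illustrative, not my actual API):

    /* Sketch of the idea: the front end only ever sees abstract events;
       whether they came from a keyboard, speech recognition or anything
       else is the input layer's problem. Names are illustrative. */

    typedef enum {
        EVENT_UP,
        EVENT_DOWN,
        EVENT_NEXT_CHILD,
        EVENT_PREV_CHILD,
        EVENT_SELECT,
        EVENT_ESCAPE
    } ui_event;

    /* The same handler services every input device. */
    void front_end_handle_event(ui_event e) {
        switch (e) {
            case EVENT_UP:         /* move focus up/to the parent */     break;
            case EVENT_NEXT_CHILD: /* move focus to the next child */    break;
            case EVENT_ESCAPE:     /* cancel/back out */                 break;
            default:               /* other events handled similarly */  break;
        }
    }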

onlyonemac wrote:
Brendan wrote:
You're mistakenly assuming that blind people can't be casual users, and also mistakenly assuming sighted people can't be regular users of an audio interface. You will continue to be wrong until you understand that these assumptions are nonsense.
I never said that blind people can't be casual users; what I said is that they don't want to be restricted to being casual users because the advanced features of an application are only available in the graphical interface. I also never said that sighted people can't be regular users of audio interfaces; what I said is that they aren't going to use an audio interface in the same situations that blind people are required to use audio interfaces, so aren't going to place the same demands on the interface that blind users will (e.g. a sighted user isn't going to care if they can't change the font size of their document with the audio interface because they'll do that later when they've got a monitor available, but a blind user who's trying to produce a professional-looking document certainly will care).


There's relevant research into this phenomenon.

onlyonemac wrote:
Brendan wrote:
The only reason I have to listen to your screen reader is to determine how much it does/doesn't suck (for both sighted and blind users equally).
I've already told you that it sucks; I've also told you that your proposed solution to it sucking isn't going to stop it from sucking.


If you think blind people don't deserve anything good; then you could just implement a "generic audio front end on top of any visual front end" (a screen reader) yourself instead of using the audio front ends that were designed specifically for each application.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Dodgy EDIDs (was: What does your OS look like?)
PostPosted: Mon Feb 15, 2016 2:12 pm 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
Thanks Ian for directing us all to the software for checking things for compatibility with all the different kinds of colourblindness, and in particular for flagging up the issue with red on black. Octocontrabass's point about rods not detecting red neatly spells out why this is such an issue. This is clearly something that all programmers should be informed about.

On the issue of front-back hearing, yes it's possible to tell in some situations whether something is behind you or ahead even if you can't move your head, but a proper scientific experiment into this (which I heard described just a couple of weeks ago on a science radio programme with Dr. Karl Kruszelnicki) found that when people's heads were held still, they lost the ability to tell whether sounds were coming from ahead or behind. However, they may have been using simple sounds without harmonics which meant they changed less when taking different paths into the ears, and the sounds may also have been different each time instead of repeating the same sound every time. With complex sounds, I'd expect the higher frequencies to be damped more than low frequency components when you play them from behind. If you are already adjusting the relative strengths of different frequencies in order to make quality differences, that would add to the difficulty in telling whether a sound has changed in quality to indicate that it is behind you or to indicate something else, or indeed that it has moved behind another object ahead of you. It is therefore not likely to be an ability that can be relied on for providing clear information unless the sounds are long lasting and you can turn your head.

If I turn a radio on and move it about over my head, I can hear substantial differences in the sound quality as I move it over my head and down behind, then back over and down in front of my face, but if a radio was placed in a static location and turned on for a few seconds and then off again, would I be able to tell where it was? The changes are easy to hear as you move it, but we're good at hearing that kind of change. It's a bit like comparing two colours that are nearly the same - when we can compare one against the other directly it's easy to see that one's marginally brighter or redder, but show one in isolation and then the other in isolation and the ability to compare them usefully is gone. A clear radio station may sound as if it's ahead of you, while a muffled one might sound as if it's behind you, but the radio may be in the same position for both.

The example with an ambulance going under a bridge which you're standing on, (presumably with the road below aligned with the way you're facing, and I also assume that your eyes have to be shut and that you're standing right in the middle of the bridge rather than at one side of it) doesn't work too well - the Doppler effect will only tell you whether it's approaching or going away, but to tell whether it's behind you or ahead of you when it's coming towards you or going away would be very hard unless you can rotate your head a bit.

I keep meaning to write some code to do a stereo experiment to find out how many directions I can identify through it. Small movements of a sound source to the side are easier for us to pick up than differences in direction between static sound sources. If you're trying to create a "sound display" using this for horizontal separation, the number of "sound pixels" sideways across the screen may be very low, while the number that can be distinguished vertically through pitch would be relatively high: quarter tones must be close to the resolution limit (for note identification - much smaller differences can be heard when comparing two notes and trying to tune them to the same frequency, but when one is heard in isolation it's much harder), so you could have perhaps 8 octaves times 24 quarter tones, and that's about 200 "sound pixels" of vertical resolution to go with a horizontal resolution which may be as low as a dozen.

Then again though, if you're using this to trace out a circle with a note that moves from side to side and up and down in pitch, the horizontal resolution would be much higher because it's easier to pick out the smaller sideways changes with a moving sound, so the trick to making this work might be to scan through the "sound image" from left to right to pick out the key features of the scene, also allowing the speed of the scan to enhance the horizontal resolution. Not every feature would be sounded on each scan - it would build up the picture for you over several scans, and it would tell you more about the area where your eyes are pointing. Objects which are likely to be of particular interest could be traced out at any time, and not necessarily from left to right. This kind of system could also draw attention to message notification flags and important controls, but without turning that into a bombardment of repetitive information.
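
As a sketch in C (the base frequency and row count are just assumptions chosen to match the rough numbers above):

    #include <math.h>

    /* Sketch: one "sound pixel" row per quarter tone, 24 rows per octave,
       8 octaves = 192 rows. The row count and base_hz are assumptions
       chosen to match the rough numbers in the paragraph above. */

    #define ROWS_PER_OCTAVE 24                           /* quarter tones */
    #define OCTAVES          8
    #define TOTAL_ROWS      (ROWS_PER_OCTAVE * OCTAVES)  /* 192 "sound pixels" */

    double row_to_frequency(int row, double base_hz) {
        /* Each row up is one quarter tone: a ratio of 2 to the power 1/24. */
        return base_hz * pow(2.0, (double)row / ROWS_PER_OCTAVE);
    }

With a base of around 30 Hz, the top row comes out near 7.7 kHz.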

The best way to program all this kind of stuff would be to allow the user to modify the program to suit their own needs rather than having a programmer dictate how their computer works for them. With intelligent operating systems, this will become possible, and that will enable each user to maximise the functionality of the machine for them. It should not be the job of the programmer to fix the appearance of anything in their operating system or applications, but to make everything as flexible as possible. You will obviously want to release your software in the form that you think works best, but the user should be allowed to disagree with your choices and modify anything and everything if they wish to, and they should be able to do this by telling the machine what they want it to give them. If they can't see the label on a button, they should be able to tell the computer to make it more readable, and the computer should do so, rewriting the code of the application if necessary to make this happen. If they are relying on a "sound screen", they should be able to spell out which information they want to hear more often and which they want less often, and they should also be able to tell it to present that information in a totally different way if they wish.

The best software in the future will be the software that accommodates the user the most in this way, giving them the machine they want to work with rather than imposing other people's preferences upon them, and as soon as you have software that does what they ask it to do, all of these issues of making it maximally usable for disabled people are already dealt with - they only have to ask it to work in a different way and it will comply. This will require high intelligence in the operating system, but that will soon come - ten years from now there will be an intelligence equalling human intelligence in every machine, and it will be an expert programmer too, capable of modifying the way any part of any program functions whenever it's asked to. This is something everyone should take into account when planning how their software is going to work ten years from now, because there's no point in spending those ten years designing something that will be wiped away by something that can produce the exact same functionality from scratch in a few minutes, and then modify it substantially in thousands of ways to adapt it to an individual user's needs.

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming

