Hi,
onlyonemac wrote:
Brendan wrote:
I very much doubt that someone using input device/s that one or more applications don't support will think this is more efficient.
But in the process of attempting to support those input devices (which, I gather, are a minority) you are compromising the efficiency of mainstream input devices.
No, I'm not. What makes you think that?
Do you think a mouse requires a radically different user interface to a trackball? Do you think that speech recognition requires a radically different user interface to a [url=https://en.wikipedia.org/wiki/Handwriting_recognition]handwriting recognition[/url]/OCR system? Do you think it's impossible to create a super-set of events by simply combining the set of events that each type of input device provides? Do you think it's impossible to simplify that super-set by merging "multiple almost identical anyway" events into one event? Do you think device drivers can't emulate some/most events that were more suited to a different type of input device (e.g. use a keypad to emulate a mouse, use voice commands to provide up/down/left/right, use a virtual keyboard to emulate key presses with a mouse or touch screen, use eye-tracking to emulate a mouse, etc.)?
Do you think when a front end receives a "literal character A" event it cares whether the event came from a real keyboard, or a virtual keyboard, or a speech recogniser, or something like a web camera that recognises sign language? Do you think when a front end receives a "cursor up" event it cares where that event came from? How about a "find" event, or a "switch to menus" event?
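To make that concrete, here's a rough sketch of what a device-independent event super-set and a source-agnostic front end might look like (in C, with purely hypothetical event and function names - this is only an illustration of the idea, not my actual protocol):

[code]
/* Sketch only: hypothetical device-independent event super-set.
 * Each driver translates whatever its hardware produces into these
 * events; the front end never needs to know which device an event
 * came from. */

#include <stdint.h>
#include <stdio.h>

typedef enum {
    EVENT_LITERAL_CHAR,    /* keyboard, virtual keyboard, speech recogniser, sign-language camera, ... */
    EVENT_CURSOR_UP,       /* arrow key, voice command "up", eye tracking, ... */
    EVENT_CURSOR_DOWN,
    EVENT_FIND,            /* "find", regardless of how the user asked for it */
    EVENT_SWITCH_TO_MENUS
} event_type_t;

typedef struct {
    event_type_t type;
    uint32_t     codepoint;    /* only meaningful for EVENT_LITERAL_CHAR */
} event_t;

/* The front end handles the event itself, not the device that produced it. */
static void frontend_handle_event(const event_t *e)
{
    switch (e->type) {
    case EVENT_LITERAL_CHAR:   printf("insert character U+%04X\n", (unsigned)e->codepoint); break;
    case EVENT_CURSOR_UP:      printf("move cursor up\n");    break;
    case EVENT_CURSOR_DOWN:    printf("move cursor down\n");  break;
    case EVENT_FIND:           printf("open find\n");         break;
    case EVENT_SWITCH_TO_MENUS: printf("switch to menus\n");  break;
    }
}

int main(void)
{
    /* Whether these came from a keyboard driver, a speech recogniser or a
     * webcam-based sign-language recogniser is irrelevant to the front end. */
    event_t typed  = { EVENT_LITERAL_CHAR, 'A' };
    event_t spoken = { EVENT_CURSOR_UP, 0 };

    frontend_handle_event(&typed);
    frontend_handle_event(&spoken);
    return 0;
}
[/code]

The point is that each driver translates whatever its hardware produces into the same small set of events, so the front end only ever handles the events themselves.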
onlyonemac wrote:
Brendan wrote:
Yes; I noticed you're still wrong and are still trying to pretend that sighted users can't be advanced/regular users of audio interfaces for no sane reason whatsoever (other than the "existing OSs are a worthless joke and make things like audio interfaces suck for everyone, forcing people who have a choice to avoid it" problem).
You mean to say, "I noticed that I'm still wrong and am still trying to pretend that sighted users will be advanced/regular users of audio interfaces with no justification whatsoever". Trying to pretend that sighted users will choose an audio interface over a video alternative, no matter how good the audio interface is, is an utter joke because to most sighted users a video interface will always be better than an audio interface. (Notice that I actually justified my statement.)
Sigh. Sure, if a sighted user is sitting in front of a nice large monitor they'll use that monitor. They won't install a nice large monitor into their car's windscreen so they can use it while driving. They won't glue a nice large monitor to the front of a bicycle helmet. They won't push a trolley carrying a monitor in front of them while walking their dogs. Gardeners, construction workers, factory workers, garbage collectors, machinists, welders, shearers, painters, cleaners, butchers, bakers, candle-stick makers - all of these (and people doing many other types of work) can't and won't use video interfaces while working. There is no reason why all of these people (literally billions of people worldwide) shouldn't regularly use audio interfaces when they can't use video interfaces, either for their own purposes or as part of their job.
Well, almost no reason.
There are 2 reasons. The first reason is "people who think screen readers are good enough", and the second reason is "people who think billions of people don't exist". These people are people like Microsoft and the W3C, and every other company/group/organisation that has consistently failed to care about anything other than "keyboard, mouse and video" when designing anything and has then retro-fitted an afterthought later; the people who are responsible for creating the festering pus you've become accustomed to and are now assuming is the "best" way to do things. Note that it's not really because these people were stupid; it's just the way technology evolved.
onlyonemac wrote:
Brendan wrote:
Do you have any proof that sighted users won't regularly use audio interfaces for things like word-processing, spreadsheets, etc. when they can't use video interfaces?
Audio interfaces can't (easily, efficiently, and effectively) convey formatting information (sure, they can read out a description of the formatting or use an audio icon, but note that I said "easily, efficiently, and effectively") so serious word processing is out of the question (note that I said "serious" word processing; casual word processing, like reviewing and editing document content, is still a valid use case).
Wrong. Quitting your job and/or abandoning your car just so that you can use a video interface is not more efficient than using audio to determine and/or change things like font styles while writing a novel, where you don't care about font styles 99% of the time and only need that ability rarely.
Yes; there are some things (e.g. bitmap picture editors, high quality desktop publishing) where audio alone isn't going to work, there are some things (e.g. remixing and remastering songs) where video alone can't work, and there are probably cases (e.g. trying to get sounds in sync with video when making a movie) where neither alone works and you need both. There is nothing anyone can do about these cases.
onlyonemac wrote:
Audio interfaces are also linear, meaning that all the information flows in a single-dimensional stream, so trying to work with graphical content, like a spreadsheet, is difficult; and no matter how good your audio interface is it will still be difficult, because this is a limitation in the way that humans perceive sound.
Spreadsheets are 4-dimensional (imagine sheets along a W axis, columns along an X axis, rows along a Y axis, and the content of each cell on a Z axis). They are flattened to 2D for a video interface (typically by making the user switch between sheets to avoid displaying the W axis, and by combining the Z axis with the X axis). For an audio interface you could just have 3D controls (for W, X and Y) and use time for Z (but have the ability to move forward/backward through time by words, sentences, sub-expressions, etc.).
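To make the mapping concrete, here's a rough sketch (again in C, with purely hypothetical names - just an illustration, not an actual design) where a cell address is a (W, X, Y) tuple, navigation commands move along one axis at a time, and the selected cell's contents are then read out over time (the Z axis):

[code]
/* Sketch only: hypothetical "4-dimensional" spreadsheet addressing.
 * W = sheet, X = column, Y = row; the cell's content is read out
 * over time (the Z axis). */

#include <stdio.h>

typedef struct {
    int sheet;     /* W axis */
    int column;    /* X axis */
    int row;       /* Y axis */
} cell_address_t;

/* For an audio interface, a navigation command just moves along one axis;
 * the interface then speaks the newly selected cell's contents. */
typedef enum { NAV_NEXT_SHEET, NAV_PREV_SHEET,
               NAV_RIGHT, NAV_LEFT, NAV_DOWN, NAV_UP } nav_command_t;

static void navigate(cell_address_t *cursor, nav_command_t cmd)
{
    switch (cmd) {
    case NAV_NEXT_SHEET: cursor->sheet++;  break;   /* move along W */
    case NAV_PREV_SHEET: cursor->sheet--;  break;
    case NAV_RIGHT:      cursor->column++; break;   /* move along X */
    case NAV_LEFT:       cursor->column--; break;
    case NAV_DOWN:       cursor->row++;    break;   /* move along Y */
    case NAV_UP:         cursor->row--;    break;
    }
}

int main(void)
{
    cell_address_t cursor = { 0, 0, 0 };

    navigate(&cursor, NAV_NEXT_SHEET);    /* spoken command: "next sheet" */
    navigate(&cursor, NAV_RIGHT);         /* spoken command: "right"      */
    navigate(&cursor, NAV_DOWN);          /* spoken command: "down"       */

    /* The audio front end would now read this cell's contents as a linear
     * stream (the Z/time axis), with the ability to skip by word, sentence
     * or sub-expression. */
    printf("speaking cell: sheet %d, column %d, row %d\n",
           cursor.sheet, cursor.column, cursor.row);
    return 0;
}
[/code]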
For 3D video (e.g. stereoscopy), there are probably interesting ways to use depth for one of the dimensions (e.g. display each sheet as a transparent layer at a different focal point).
Note that nobody (that I know of) is able to read multiple sentences simultaneously; as they read, their eyes and brain turn the text into a linear sequence. Even for "3D video" you'd select a cell (e.g. focus at that depth, look at the cell) and read the cell's contents as a linear sequence.
onlyonemac wrote:
Brendan wrote:
Yes; I'm trying to avoid limiting the ways that computers can be used; partly because there are a lot of people that can benefit now, and partly because I'm trying to defend against the future.
In trying to defend against the future, you are limiting the ways in which computers can be used currently.
Wrong. The ways in which computers can currently be used are limited by the past. By doing something less limited, I defend against the future while also improving the present.
onlyonemac wrote:
Brendan wrote:
Note: So far (in my lifetime) I've seen computers go from expensive/rare things (only owned/used by large companies) to something homeless people have; I've seen large/heavy "portable" systems (with built in CRT) that very few people wanted transform into light/thin devices that are everywhere; I've seen "inter-networking" go from dial-up BBS to ubiquitous high speed Internet; I've seen "hard-wired phone with rotary dial" become smartphones; I've seen audio go from "beeps" to high quality surround sound; I've seen the introduction of things like emojis and gestures; I've seen multiple technologies (touch screens/touch pads, VR and augmented reality, motion tracking in multiple forms) go from "area of research" all the way to commercial products. If you think that what we have today will always remain the same, then... The only thing that is certain is that the way people use computers will continue to change.
And what do all of these have in common? They developed in response to the users themselves. This is a consumer-driven industry. Users wanted to be able to take their computers with them, so the companies developed laptops. Users started using punctuation characters to represent expression in text-based communication, so developers started adding emoji to mobile keyboards and communication apps. Users wanted to take stable video with their smartphones, so researchers integrated motion tracking into smartphone camera apps. Every successful development in the computing industry has been a direct result of something that the users wanted.
Every successful development in the computing industry has been a direct result of technology first, and then of economics and marketing. When technology makes something possible, it ends up being a contest between economics (price) and marketing (convincing people they want to pay that price). Users have no idea what they want until they see it (until marketing shoves it in their face and tells them to want it).
onlyonemac wrote:
On the other hand, we've got idiots like you who try to push development forward against the way that users want it to go. That's what Microsoft have done. They changed the menubar interface to a "ribbon", and now everyone I've ever met gripes at how inefficient this interface is, yet Microsoft refuse to change it back. They made a desktop operating system designed for a touchscreen, hoping that users would get touchscreens on their desktops, but users found that desktop touchscreens are uncomfortable to use and complained about the interface change until Microsoft (sort of) changed it back. Then they made an operating system that integrates everything with a Microsoft account, hoping that users would adopt cloud-based products, and users worry about their privacy and make sure to disable every cloud integration feature in Windows. These developments failed because they're not responding to what users want, but rather expecting that users will want a particular thing and then forcing that thing on them until either they begrudgingly give in or the developers lose money and have to change it back.
For the ribbon interface, I agree - it was a clear mistake. For the rest, I don't think it was a mistake at all - I think it was a calculated risk. Essentially, I think Microsoft knew all along that desktop users wouldn't like it, but wanted to force application developers to support their smartphones, wanted to force users to adopt their cloud solutions, and wanted to force people into using subscription-based schemes; and simply didn't care that desktop users wouldn't like it.
onlyonemac wrote:
If you want to future-proof your OS, don't try to support every hypothetical input device, output device, and interface; make it extendible and expandable so that you can respond to changes in the industry and keep users satisfied without compromising on the things that they currently use every day. And application developers will follow suit: they won't need to implement support for thousands of different input devices; they will only need to implement support for those that are available at the time of development, and if a new technology comes out and they want to support it then they can add support then. It doesn't matter what you're expecting will happen; it's what actually happens that matters, and preparing your OS to respond to those developments is the most sensible thing that you could do right now.
This is so retarded that it doesn't justify a response. Did you think about what you were saying when you said it?
Cheers,
Brendan