Brendan wrote:
embryo2 wrote:
Yes, bugs are possible even in the VM code, but the maturity of a system's basic components always makes them less buggy than you expect. The situation is the same as with any other service software, from the OS to network support, databases, file systems and so on. The service layer adds some benefits to the system despite possible bugs.
This can probably be expressed as a formula; like:
bugs = lines_of_code * complexity_of_code * (1 - developer_skill) / (compile_time_bug_detection/3 + developer_bug_detection/3 + maturity/3)
Where all the values are between 0 and 1.0 except for lines_of_code.
Ok, let's try the mathematical approach. I agree with the definition of the numerator, but I disagree with the denominator. Developer bug detection and maturity are closely related and intermixed, so I suppose it is better to combine them into one value.
Compile-time bug detection is another issue. I suppose it can be treated as a constant, because a large chunk of code usually contains a relatively stable mix of complex and simple code, so the ratio of complex to simple code is roughly constant. I suppose this is true for any big enough chunk of code, because whatever system we take, there is always a lot of dumb code implementing simple algorithms. Even in something like AI we still see a lot of sorting, iterating, comparing, accessing data structures and other simple stuff. To be precise we could introduce a complexity factor that describes the ratio of complex code to simple code. But even with all this in mind, we should remember that whatever share of bugs the compiler detects is eliminated immediately, so, because those bugs no longer exist, it is perfectly legitimate to remove them from the equation.
Also, we should introduce some means of accounting for new bugs that are introduced during system updates. So, the final formula would look like this:
First we can show all caught bugs:
caught_bugs = lines_of_code * complexity_of_code * (compile_time_bug_detection_constant + developer_bug_detection_factor)
Here developer_bug_detection_factor depends on the skills of the development team, while compile_time_bug_detection_constant is always the same because compiler updates are rare.
Next we can show all remaining bugs:
remaining_bugs = sum_for_all_code_chunks(lines_of_code[i] * complexity_of_code[i])
               - sum_for_all_code_chunks(lines_of_code[i] * complexity_of_code[i] * overall_bug_detection_by_developers_users_tests[i] * exponent_asymptotically_tending_to_1(1 - code_age[i]/max_age))
               = sum_for_all_code_chunks(lines_of_code[i] * complexity_of_code[i] * (1 - overall_bug_detection_by_developers_users_tests[i] * exponent_asymptotically_tending_to_1(1 - code_age[i]/max_age)))
Now we can see that the most influential part of the equation involves code age, while all the other parts tend to fluctuate near some constants. So, the more mature the code is, the fewer bugs it contains, and your words about useless system components become much less applicable as time passes.
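To make the formula concrete, here is a small Python sketch. It is illustrative only: the chunk sizes, the detection factors and the particular shape of the "exponent asymptotically tending to 1" function are assumptions I picked for the example, not values anyone measured.

import math

def tends_to_one(x):
    # stand-in for exponent_asymptotically_tending_to_1: returns ~1.0 as x approaches 0
    return math.exp(-3.0 * x)

# (lines_of_code, complexity_of_code, overall_bug_detection_by_developers_users_tests, code_age / max_age)
chunks = [
    (10000, 0.8, 0.9, 1.0),  # old, well-tested chunk
    (2000,  0.5, 0.6, 0.1),  # freshly written chunk
]

remaining_bugs = sum(
    loc * complexity * (1.0 - detection * tends_to_one(1.0 - age_ratio))
    for loc, complexity, detection, age_ratio in chunks
)
print(remaining_bugs)

With these made-up numbers the mature chunk retains only about 10% of its potential bugs while the fresh chunk retains about 96%, which is exactly the "code age dominates" effect described above.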
Brendan wrote:
For example, for a simple "Hello World" application you'd expect zero bugs for the application code, maybe 1 bug in a library and maybe 5 bugs in the kernel. Then you add a VM on top because some fool decided it'd help "reduce" bugs and the total bugs for "Hello World" increases by several orders of magnitude. In this case, because "Hello World" is so simple, using a VM is incredibly idiotic.
Usually the "Hello World" programs are written by beginners, it means that the simplicity of a language is very important. With the help of VM many languages become very easy to learn, so, your claim about "idiotic" use of VM is highly doubtful.
Brendan wrote:
For a medium sized application with medium complexity it's a little better; but most of the bugs are caught by compiler or unit tests or users; so using a VM (even a "mature" VM) still adds more bugs than it solves and just makes things worse.
Here again you forget about developer skills. Developers en masse have relatively limited skills, and VMs help them write reliable applications. So the "most bugs are caught" you mention is partly done by the VM itself, and without the VM the number of bugs would be at an unacceptable level.
Brendan wrote:
For a VM to make sense; you need an extremely large and complicated application so that "bugs prevented by VM" is actually larger than "bugs introduced by VM". Applications like this are extremely rare.
And here you forget not only about developer skills, but also about many additional issues the VM helps us take care of: security, reliability, user experience, compatibility, the ability to catch bugs more quickly, and so on.
Brendan wrote:
You were wrong before and you are wrong now. "Economy of scale" is minor and (for massive permanent jobs) things like provider markup/profit, internet costs, latency, configuration hassles, etc surpass any benefits by a wide margin.
Well, let's consider a very simple case - your personal web server. Your approach is about keeping everything at home, so all your computers will run some part of the server. My approach is about efficiency (including the financial part), so I rent a virtual server in a cloud. Now let's compare costs. Your tens of computers consume at least 1000 watts, while I don't care how many watts the cloud consumes, because my payment is fixed: $4 per month for a small-load server. Now you can multiply 24 * 30.5 kilowatt-hours by the price of a kilowatt-hour in your location and compare the result with the $4 mentioned above.
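To put numbers on it (assuming, say, $0.12 per kilowatt-hour, which is just a placeholder for whatever your local rate is): 1 kW * 24 h * 30.5 days = 732 kWh per month, or roughly $88 per month in electricity alone, against the fixed $4 per month for the rented virtual server.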
Brendan wrote:
For anything where users are involved you've got about 100 ms between the user pressing a key and the screen being updated before the system feels sluggish. For most of the internet I'm lucky to see "ping" report times that are better than 200 ms despite the fact that no processing is involved and the data is tiny. Basically; the internet is too slow to handle "doing nothing" with acceptable latency. For ethernet/LANs latency is typically better than 1 ms; so you can have distributed applications that feel like they respond to the user instantaneously despite the networking latency.
It was you who wrote about the uselessness of calling a string comparison library through your messaging approach, but now you think it is very useful to call a server over the network for every typed character. Maybe it is worth remembering that some processing is still possible on the client side? With this in mind we can safely assume that even latencies on the scale of one second can be perfectly acceptable in many cases.
Brendan wrote:
Take a look at things like (e.g.) the requirements for storing people's medical records or credit card information, and tell me if it's even legal to use "cloud" for these things in the first place.
I suppose it's legal when the cloud provider satisfies the legal rules. If one private enterprise is allowed to store some sensitive data, why shouldn't another enterprise be allowed to do the same? And satisfying the legal rules includes being liable in court if something goes wrong. So what is the difference between two enterprises holding sensitive data if both implement all the required regulations?
Brendan wrote:
embryo2 wrote:
The communicating entities boil down to the ability of some entity to extract independent subtasks from a bigger one. Such parallelization efforts still deliver no viable results, so you are going to outperform the whole academic society.
The same basic "asynchronous messaging" model has been in use by virtually all very scalable systems and virtually all fault tolerant systems for about 50 years now. Just because you don't think you can design software like this doesn't mean that everyone else can't.
Ok, it has been in use, but your goal is distributed processing, so you should define a way to efficiently distribute tasks among all available computers. Or you should sacrifice efficiency and let most of your computers sit idle while your smartphone struggles to perform some calculations.
Brendan wrote:
embryo2 wrote:
Or the application just serializes its current state and sends it to Jane's application. No process was stopped or started, nothing was written to disk.
Jane wasn't running any application before.
To serialise the data you'd need extra code to serialise it, extra code to start the process that receives the data, extra code in the application to receive and parse the data, extra bandwidth to transfer the data, and then extra code to terminate Fred's "now unused" old process.
If Fred was editing a movie and the application was working with 50 GiB of data, then the application would be transferred to Jane instantly.
In your case all the problems from above are also in play. You need some code to serialise the data, extra code to start the process that receives the data (Jane's application didn't exist yet), extra code in the application to receive and parse the data, extra bandwidth to transfer the data, and then extra code to terminate Fred's "now unused" old process (or application).
And you also need to send the 50 GiB file to Jane's computer, or else it would be a wonder if her computer could process the video without having its bytes available.
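Just to show what "serialise the state and send it" amounts to in practice, here is a minimal Python sketch. The names, the port handling and the use of pickle over a raw socket are my own assumptions for illustration, not a description of either design being debated.

import pickle
import socket

def send_state(state, host, port):
    # serialise the application's current state and stream it to the receiver
    payload = pickle.dumps(state)
    with socket.create_connection((host, port)) as conn:
        conn.sendall(len(payload).to_bytes(8, "big"))  # length prefix
        conn.sendall(payload)                          # the "extra bandwidth" part

def receive_state(port):
    # the receiving process must already exist; it accepts, reads and parses the data
    with socket.create_server(("", port)) as server:
        conn, _addr = server.accept()
        with conn:
            size = int.from_bytes(conn.recv(8), "big")
            data = bytearray()
            while len(data) < size:
                chunk = conn.recv(min(65536, size - len(data)))
                if not chunk:
                    break
                data.extend(chunk)
    return pickle.loads(bytes(data))

Note that none of this moves the 50 GiB of movie data itself; that transfer still has to happen on top of the above.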
Brendan wrote:
You think I should forget about everything because "desktop themes" are impossible and someone might like a different colour??
You think the user's taste is limited to colours only? Remember the holy wars among Windows, Linux and MacOS fans. A universal tool for everybody is impossible without some kind of monopoly, which you don't have.
Brendan wrote:
For integration with "other corporate software" you've got it backwards. That other corporate software would have to integrate with the OS (including the OS's maintenance tools) not the other way around.
Ok, but now it's your turn to answer how it is possible to make corporations switch to your system. If they switch, then yes, they will invest in integration, but would they switch? And the switch is such a beast that it forces you to forget about some perfectionism and to implement exactly what the corporations want, whether you like it or not. That last constraint, it seems to me, is not as easy for you to accept as it would need to be for the corporations to start looking at your product. It's about marketing and advertising, not about the highest quality. Yes, the world often looks that ugly.
There should be some visible benefit, like reducing the number of computers thanks to distributed processing, but such benefits are hard to achieve because there's no simple way of distributing the processing. And even if you managed to create such a system, somebody from Microsoft would come to your client and tell them that today's graphical interfaces help improve user performance and reduce costs, and that Microsoft has all the required video drivers (developed by hardware vendors) while your system has only a very limited set of drivers for old hardware. Such advertising can close the deal for Microsoft. In short: it's too expensive to compete with the big boys.