Tilde, I don't mean to be rude or anything, but what exactly is the point you're trying to communicate here? It seems you're trying to argue that coding in assembly will yield better code? If you are, then you're plain wrong.
~ wrote:
Many optimizations like the one pointed out previously are just algebra with different degrees of complexity to reduce constants and variables. Beyond that, the Intel/AMD manuals indicate how to optimize code. If there are different optimizations for different CPUs, we could start by looking at which ones differ between CPU models and create pseudoinstruction mnemonics that would expand to optimized sequences. Those optimizations could probably be made partially or fully portable over time.
All of that is already done today; we call it a C compiler. The C language is portable and the compiler knows how to optimize: problem solved.
Essentially _all_ of the optimizations differ for each CPU; all modern x86 CPUs do tons of weird stuff that you can't reasonably learn, understand and apply to any meaningful code base. And even if you could (you can't, seriously), it would prohibit most future changes/improvements because the result would be spaghetti code. You see, if we write proper code in C and then compile it to asm, it's okay for the compiler to produce "spaghetti" asm because the C code stays sane and understandable. Each CPU has different pipelines, different amounts of cache, possibly different cache line sizes, a different number of execution units, etc.
Long gone are the days when Intel published how many cycles an instruction takes, because no such figure exists today: some instructions take 1/3 of a cycle, and with TLB and cache misses an instruction can take hundreds of cycles. Are you seriously suggesting that at every point in the code you're going to analyze the full impact on everything in the system, do that without mistakes, for every assembly instruction, and actually produce something within the next million years?
~ wrote:
I think it would do me good to always inspect the code generated by the compiler to learn the actual tricks used to optimize code, so I can manually write increasingly optimal code.
In practice you'll never be able to recreate what the compiler can do, for the same reason that you'll never be able to do 4 billion simple additions per second. The amount of work a compiler can do in a second is far beyond anything you can reasonably do, so leaving the optimizations to the compiler is the correct thing to do.
If a compiler doesn't optimize your code as well as you can in some tiny specific place, then you can hand-optimize that; or, better yet, improve the optimizer so that in the future it can.
~ wrote:
Human optimizations for carefully selected key algorithms could probably always end up being superior over time if we really know the optimization algorithms themselves and turn them into portable pseudoinstructions built from optimized sequences of native instructions.
You can't do that; or rather, the way to do that is to recreate C (or preferably a better language)... The optimization isn't about knowing which "asm" sequences would be fastest in general, it's that on a specific CPU a specific instruction sequence will be faster due to XYZ, and that's different for each CPU model, let alone each manufacturer, never mind different architectures.
~ wrote:
The optimizations could even become part of the official algorithm in the same way as a mathematical formula, so it would also be worthwhile to study the compiler and look for information sources to apply those tricks manually.
So now your algo, which is 10 lines of asm, turns into 50k lines of asm: 10 lines for each possible CPU. And that's one really simple algo; good luck maintaining that, and doing that for every algo/function that is ever needed. That's why we have higher-level languages and compilers that optimize.
Oh, and the (compiler) optimizations will only get better, while hand-optimizing assembly for x86 will only get more difficult..
edit.
PS. If you want to learn asm and optimization for fun (or to create your own compiler, etc.), by all means do it. There's nothing wrong with that, but don't seriously suggest that anyone in their right mind should start coding in asm, except for a few tiny things.