metallevel wrote:
bwat wrote:
Being more pedantic than usual, I would like to point out that all of the symbols we programmers use are interpreted at some level: all languages are interpreted, all microprocessors interpret their opcodes, and in some cases even the operands are interpreted.
Yeah, but microprocessors do it in hardware. Some microprocessors are microcoded too, but that is slower than running on tailor-made hardware.
You are, of course, right to point out the huge difference in instruction latency. The rule of thumb is a 1 to 2 order-of-magnitude increase in execution latency for each level of interpretation. In the system I'm currently building, the source interpreter is roughly 240 times slower than the bytecode interpreter. These days I'm no longer surprised when I find bytecode-interpreted code to be good enough. Here's an old paper from a Smalltalk company talking about bytecode interpreters and some of their advantages (like code density - just to widen the scope from the rather narrow latency-is-top-priority viewpoint):
http://www.object-arts.com/downloads/pa ... IsDead.PDF
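To make the interpretation-overhead point concrete, here is a minimal sketch of the kind of stack-based bytecode dispatch loop being discussed. The opcode set and encoding are invented for illustration (they are not taken from the paper or from the system mentioned above); the point is simply that each bytecode is fetched and dispatched in a handful of machine instructions, with no re-parsing of source text, and the whole program packs into a few bytes, which is the code-density argument.

Code:
#include <stdint.h>
#include <stdio.h>

/* Illustrative opcode set -- invented for this sketch, not from the paper. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

/* A tiny stack-based bytecode interpreter: fetch, decode, dispatch. */
static void run(const uint8_t *code)
{
    int32_t stack[64];
    int sp = 0;          /* stack pointer */
    size_t pc = 0;       /* program counter into the bytecode */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:                     /* operand is the next byte */
            stack[sp++] = code[pc++];
            break;
        case OP_ADD:
            sp--; stack[sp - 1] += stack[sp];
            break;
        case OP_MUL:
            sp--; stack[sp - 1] *= stack[sp];
            break;
        case OP_PRINT:
            printf("%d\n", stack[sp - 1]);
            break;
        case OP_HALT:
            return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4 -- ten bytes of code and operands in total. */
    const uint8_t program[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT
    };
    run(program);
    return 0;
}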
Nable wrote:
EDIT: Looks like JIT and AOT compilation actually have become quite popular for JavaScript recently. But I still wouldn't regard such techniques as 'efficient' (except compared to simple interpretation), since time must still be taken to compile the program on the user's end.
Every time a compiled routine is executed, the compilation cost per executed instruction drops, so efficiency increases. Again, it is a question of what is good enough. I worked on an AOT compiler project that was shut down because the JIT sister project turned out to be good enough even for applications that were deemed time-critical and therefore in need of AOT compilation.
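The amortisation argument can be put in rough numbers. The figures in the sketch below are invented purely for illustration; they are not measurements from either of the projects mentioned:

Code:
#include <stdio.h>

/* Back-of-the-envelope amortisation of JIT compilation cost.
 * All numbers below are invented for illustration; real figures
 * depend entirely on the VM and the workload being measured. */
int main(void)
{
    double compile_cost = 1000.0;  /* one-off cost to JIT a routine (us)  */
    double interp_time  = 50.0;    /* cost per call when interpreted (us) */
    double native_time  = 2.0;     /* cost per call once compiled (us)    */

    /* Break-even point: compile_cost / (interp_time - native_time) calls. */
    double break_even = compile_cost / (interp_time - native_time);
    printf("JIT pays for itself after ~%.0f calls\n", break_even);

    /* Amortised cost per call after n executions of the compiled routine. */
    for (int n = 1; n <= 100000; n *= 10) {
        printf("n = %6d: interpreted %.1f us/call, jitted %.2f us/call\n",
               n, interp_time, compile_cost / n + native_time);
    }
    return 0;
}

With these made-up numbers the compiler pays for itself after roughly 21 calls; whether that counts as good enough is exactly the kind of thing only measurement can settle.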
Ultimately, you never know until:
a) you get your hands on a specification, and
b) you do some measurements.
Until that time, we're just working off technical prejudice and guesswork.