OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Tue May 24, 2016 6:18 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Schol-R-LEA wrote:
To the best of my knowledge, the OS developer has only limited options for optimizing the L1, L2 and (when present) L3 caches on the x86 chips, mostly regarding flushing a cache or disabling a cache entirely, but the cache hardware itself will be more effective at controlling its own mapping of segments of memory to cache cells when it can limit the set of memory it needs to map. Page maps, as I understand it, provide some of that information.

OK, let that stand as the #1 item on the pro side for paging.
Schol-R-LEA wrote:
Furthermore, using unpaged memory is not simpler from the perspective of the operating system. Most 'flat' memory modes are really only flat from the perspective of a userland application; any multitasking OS still needs to parcel the memory up into multiple process spaces. The paging hardware makes this easier, by allowing non-contiguous memory regions to be mapped to a contiguous virtual space (important when memory is being allocated at runtime), among other things.

With paging, the overall algorithm that you employ to manage memory is split between hardware and software parts. With "no paging" there's the same algorithm, just without the separated part. So the complexity is still the same.

But there's a small part of the algorithm that is implemented in hardware and can be reused by the system developer. As a time saver it's nothing in comparison with the decrease in flexibility you will suffer, so it's not worth considering this part at all. Another difference here is the execution speed of the hardware part of the algorithm, which should be good. But the size of that part tells us that the overall gain will be just a tiny fraction of all the effort required for memory management. However, let's mark this advantage as the #2 item in the pro list for paging.
Schol-R-LEA wrote:
It also reduces the computational burden for the OS itself, again by automatically restricting the space of memory that the system needs to be concerned with at a given moment.

I don't understand how it reduces the computational burden.
Schol-R-LEA wrote:
nor am I convinced that simplicity for the operating system developers is really a wise goal. Providing simplicity for the users and the application developers often requires considerable complexity on the part of the OS implementation.

Simplicity for users won't be given for free. If the OS is too complex then users will have a piece of trash instead of something manageable. So we simply must pay attention to the complexity of the OS, and generally that is achieved by creating extensible systems: the core should be very simple, and then there will be few problems with extending it.
Schol-R-LEA wrote:
In any case, it is my (admittedly imperfect) understanding that you cannot enter x86-64 long mode without enabling paging

OK, let's mark it as a required quirk of the x86 architecture.
Schol-R-LEA wrote:
I gather that you intend to write a single high-level garbage collector that would be used unchanged on all hardware, correct?

My goal is the standard top-down approach: I want a simple and manageable architecture, where all extensions (including hardware-related optimizations) are implemented as manageable parts with a simple interaction protocol.
Schol-R-LEA wrote:
it is copying because that is how copying collectors work - 'garbage collection' is something of a misnomer here, as it is the live data that is actually collected by a copying collector, with the memory containing any remaining dead data then being freed in bulk. The algorithm works by copying the live data to a reserved area of memory in order to both eliminate garbage and compact the heap; when the copying is done, the reserved area is then marked as the active area, and the previous active area is marked as freed. My idea is simply to use paging to trigger the garbage collector, and using pages to partition the memory for faster collection cycles.

Let's compare the two algorithms, with paging and without it. It's easy to understand how the "no paging" algorithm works, but how paging helps to trigger the garbage collector is not clear to me. The basic idea is to find a way of detecting that there are no more free bytes in a page, so how can paging help with that? Or there can be some other condition that triggers the "copy to new page" procedure, but again, how is paging related to detecting that condition?
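For reference, the copying scheme described in the quote can be modeled in a few lines of Python. This is only a sketch of the general technique (a Cheney-style two-space copy) under simplifying assumptions; the object layout here is purely illustrative, not anyone's actual design:

```python
def copy_collect(roots):
    """Copy everything reachable from `roots` into a fresh to-space.
    A forwarding map ensures each object is copied exactly once; the
    to-space itself serves as the scan queue (breadth-first copying).
    Whatever remains in the old from-space is dead and freed in bulk."""
    forward = {}      # id(old object) -> its copy in to-space
    to_space = []

    def copy(old):
        if id(old) not in forward:
            new = {'value': old['value'], 'refs': list(old['refs'])}
            forward[id(old)] = new
            to_space.append(new)
        return forward[id(old)]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):            # scan pointer chases the copies
        obj = to_space[scan]
        obj['refs'] = [copy(r) for r in obj['refs']]  # redirect edges, copying as needed
        scan += 1
    return new_roots, to_space

# A from-space with one garbage object: only 'a' and 'b' survive.
a = {'value': 'a', 'refs': []}
b = {'value': 'b', 'refs': [a]}
g = {'value': 'g', 'refs': [a]}        # unreachable from the roots
roots, to_space = copy_collect([b])
```

The cost is proportional to the live data, not to the heap size, which is why the quote calls 'garbage collection' a misnomer: the dead data is never touched at all.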
Schol-R-LEA wrote:
The same applies to using page misses to trigger collection - the memory manager has to track when memory is being exhausted within a given arena anyway, and the pager happens to be a convenient mechanism for detecting that.

The memory manager looks for information about the free and used parts of memory. If you use paging, how does it help to determine whether a page is free, full, or partially used? Or do you just move the required data structures into the paging caches, and lose a lot of flexibility by being bound to structures that are in no way optimized for garbage collection?
Schol-R-LEA wrote:
You can, of course, just not as efficiently or securely (as a hardware mechanism, the paging can, at least in principle, both run faster than the software equivalents, and block uncontrolled access to pages from outside of the processes those pages are assigned to when running userland processes - and unlike segmentation, paging is fine-grained enough and automatic enough to be practical for my intended design).

Uncontrolled access in a managed environment is something unexpected. It can be a bug (as was mentioned before), but it can't be the result of normal operation. So it's really a debugging matter, and for debugging there's the idea of a debug session with many additional hardware features involved (like debug registers or even virtualization). That's why I see no point in the security-related paging stuff.


And now we can look at the pros of paging:

1. Probably there are some memory access optimizations available when paging is used.
2. A bit of a speed increase is possible because a small part of the overall algorithm executes at the hardware level.

However, the cons include such a beast as a big decrease in flexibility.

So, we can trade a few cycles gained for an overly constrained architecture that makes the system too complex. Which is better? I vote for flexibility.

_________________
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Tue May 24, 2016 8:28 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
embryo2 wrote:
Schol-R-LEA wrote:
Furthermore, using unpaged memory is not simpler from the perspective of the operating system. Most 'flat' memory modes are really only flat from the perspective of a userland application; any multitasking OS still needs to parcel the memory up into multiple process spaces. The paging hardware makes this easier, by allowing non-contiguous memory regions to be mapped to a contiguous virtual space (important when memory is being allocated at runtime), among other things.

With paging, the overall algorithm that you employ to manage memory is split between hardware and software parts. With "no paging" there's the same algorithm, just without the separated part. So the complexity is still the same.


I am at a loss to understand what you are saying here. Could you please expand on this point?

embryo2 wrote:
But there's a small part of the algorithm that is implemented in hardware and can be reused by system developer. As a time saver it's just nothing in comparison with the decrease of flexibility you will have, so it's not viable to consider this part at all.


I'm not following you here, either. Where is the loss of flexibility in this?

Just to be clear, I am not seeing this as a matter of relying on a hardware mechanism, but rather using the available hardware to supplement the default software mechanism. The specification for memory management is that it provide enough memory for each process or raise an error when it cannot, and release memory that is no longer in use at the earliest point it can. So long as it accomplishes this goal, the details of the implementation should be of minimal importance, so how it is implemented on a given hardware platform should reflect what is most efficient for that system. Is paging going to be efficient on a given platform? Without profiling some test code and applying cost-benefit analysis, there's no way to know.

embryo2 wrote:
Schol-R-LEA wrote:
It also reduces the computational burden for the OS itself, again by automatically restricting the space of memory that the system needs to be concerned with at a given moment.

I don't understand how it reduces the computational burden.


Perhaps I am wrong in this, but my thought is that it reduces the amount of memory it has to manage at a given point - the memory manager need only concern itself with the memory used by the current process. It can use a smaller, more localized representation of the data map, and has a smaller search space to work in when trying to determine things such as which memory is free for re-allocation.

While these are not advantages specific to paging, the paging system is useful for implementing them.

embryo2 wrote:
Schol-R-LEA wrote:
nor am I convinced that simplicity for the operating system developers is really a wise goal. Providing simplicity for the users and the application developers often requires considerable complexity on the part of the OS implementation.

Simplicity for users won't be given for free. If the OS is too complex then users will have a piece of trash instead of something manageable. So we simply must pay attention to the complexity of the OS, and generally that is achieved by creating extensible systems: the core should be very simple, and then there will be few problems with extending it.


I am pretty sure this is exactly what I have been saying all along... except that you seem to think I am saying that paging should always be part of the OS. I've never said anything about the core of the OS at all, as that core - or rather, the reference implementation - should have nothing specific to any hardware mechanism. Those are factors for when you are adapting the design to a specific platform, and which works best will depend on the platform - or even the specific machine being used, hence my intention to use Massalin's adaptive code generation techniques.

embryo2 wrote:
Schol-R-LEA wrote:
I gather that you intend to write a single high-level garbage collector that would be used unchanged on all hardware, correct?

My goal is the standard top-down approach: I want a simple and manageable architecture, where all extensions (including hardware-related optimizations) are implemented as manageable parts with a simple interaction protocol.

This statement seems a bit vacuous, or at least vague. I can see what you are saying, I think, but it just boils down to you applying basic software design principles. I would be ashamed to admit that I was not applying these principles to the design of a sizable project, at least not without damn good reason.

Furthermore, without details, these statements are meaningless, as terms such as 'manageable parts' or 'interaction protocol' can mean almost anything. What qualifies as an interaction protocol? That could be anything from a specific JVM bytecode for triggering a system service to a trap or soft interrupt to a simple function call using the standard Intel C calling convention to a system call that triggers loading a particular software module. It is a meaningless statement unless we can grasp what kind of protocol we're talking about.

Finally, people of comparable intelligence can still disagree as to what qualifies as a 'simple and manageable architecture' and a 'simple interaction protocol', especially if we're mixing up the users' perspectives with the system developers'. Are UNIX pipes simple? Are MS-DOS style interrupt-vectored system calls simple? Is message-passing method invocation (as used in Smalltalk) simple? I can make arguments both for and against in each of those cases, both from the user's or client-programmer's perspective and from the OS developer's perspective.


embryo2 wrote:
Schol-R-LEA wrote:
My idea is simply to use paging to trigger the garbage collector, and using pages to partition the memory for faster collection cycles.

Let's compare the two algorithms, with paging and without it. It's easy to understand how the "no paging" algorithm works, but how paging helps to trigger the garbage collector is not clear to me. The basic idea is to find a way of detecting that there are no more free bytes in a page, so how can paging help with that? Or there can be some other condition that triggers the "copy to new page" procedure, but again, how is paging related to detecting that condition?


OK, this may not have been my finest idea, I will admit. My thought was basically that paging is usually triggered when a page not currently in memory is accessed, which often means that one or more pages will have to be swapped back out of memory to make space. My idea was that when a page is to be paged out, it would force a GC cycle on that page before writing it, meaning that only the live data would be written to the swap storage - and if there is no live data, the page as a whole can simply be freed. It seemed to me like a fair means of amortizing the garbage collection cycles as well (rather than waiting for the whole arena to get filled), though I haven't actually calculated this. It wouldn't be the only time the memory is reclaimed, of course, but it would mean it was getting triggered at points where memory is likely to be running low.
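To make the idea concrete, here is a toy Python model of it (the names and data layout are mine, purely illustrative): when a page is chosen for eviction, collect it first, write only the survivors to swap, and free the page outright if nothing survives.

```python
def evict(page, is_live, swap):
    """Collect `page` (a list of objects) before paging it out:
    only objects for which is_live() holds are written to swap;
    a page with no live data is dropped without any write at all."""
    survivors = [obj for obj in page if is_live(obj)]
    if survivors:
        swap.append(survivors)       # only live data reaches swap storage
    return len(survivors)            # 0 => the whole page was simply freed

swap = []
written = evict(['x', 'dead1', 'dead2'], lambda o: o == 'x', swap)
freed = evict(['dead3'], lambda o: False, swap)
```

The second eviction writes nothing: the page held only garbage, so it is reclaimed without touching swap, which is the amortization being described.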

embryo2 wrote:
Uncontrolled access in a managed environment is something unexpected. It can be a bug (as was mentioned before), but it can't be the result of normal operation.

Once again, I'm not sure I see what you are saying here... Are you claiming that an 'uncontrolled access in managed environment' can only arise out of a bug in (presumably) the 'managed environment'? That sounds excessively optimistic to me, as it assumes that there will never be any security exploits or hardware transients. If you truly believe that it is possible to write any kind of software that can prevent all unforeseen exploits, and that hardware glitches are either too rare or too unimportant to be addressed, then no rational response can be made to such an unrealistic viewpoint.

embryo2 wrote:
So it's really a debugging matter. And for debugging there's the idea of a debug session with many additional hardware features involved (like debug registers or even virtualization). That's why I see no point in the security-related paging stuff.


I have no idea what you mean by this. None at all. I'm not sure if I want clarification or not at this point, but since I cannot respond to this without it, well... please, explain further.

The only thing I think I can get from it is that you see security as something that can be provided in a single software layer, a claim that seems absurd to me. Security is a process, one with several layers, and requires active participation on the parts of the users, application developers, and administrators, as well as the OS developers. No amount of 'managed code' can prevent a user from running a program that he or she has rights to use in a way that compromises their data. 'Managed environments' are at most one line of defense, and not even the most important one at that - the most dangerous security risks do not come from software exploits, but from social engineering tricks that get the user to do what you want, often without writing a line of code. There is no way for any program to prevent a user from giving away information they have legitimate access to, or doing something foolish but permissible with their system if they have the rights needed to perform that action.

Perhaps I am being too 'belt and suspenders' here, but defense in depth seems a no-brainer to me. Assuming that any one mechanism cannot be compromised almost ensures that once it is compromised, the entire system will be. Your Managed Environment is the software equivalent of the Maginot Line, statically defending the wrong things in the wrong ways and in the wrong places while ignoring the more serious threats elsewhere, and at such a great cost that it undermines the ability to make a more dynamic response once it is bypassed.

While I agree that code management is a valuable set of tools, and unlike Brendan I am not convinced that such runtime checks can always be avoided (though his argument seems less to do with the runtime checks themselves than with whether they should be implicit or explicit - he defines explicit checks as control flow rather than management, which seems a strained objection to me), you seem to be placing far too much emphasis on it, and ignoring those areas where it makes sense to supplement it.

embryo2 wrote:
So, we can trade a few cycles gained for an overly constrained architecture that makes the system too complex. Which is better? I vote for flexibility.

You still haven't explained why this would be less flexible, and I am not certain why it would be. To repeat, I see this as a matter of an implementation detail, not a feature of the OS itself; whether the platform even has hardware memory management or not, and whether it is used or not, IMAO should be a matter of concern for the implementation, not the overall design.

I would like to say that we should agree to disagree and drop this matter for the time being, but I am still not sure if I have understood all your points correctly. Please feel free to elaborate on those points I have said I was confused about, but at this point I think it best if I move on to more productive issues once you have done so.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Wed May 25, 2016 4:47 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Schol-R-LEA wrote:
embryo2 wrote:
With paging, the overall algorithm that you employ to manage memory is split between hardware and software parts. With "no paging" there's the same algorithm, just without the separated part. So the complexity is still the same.


I am at a loss to understand what you are saying here. Could you please expand on this point?

For a developer to manage memory it is required to do some steps, like remembering the memory ranges in use and servicing memory allocation requests by marking a free range as occupied. Such steps are perfectly implementable using a software-only approach, but your idea is to add a few hardware tricks to it. But what can the hardware do for us? Can it change the algorithm we are going to use to manage memory? No. It just replaces some parts of the algorithm with a hardwired implementation. So the algorithm is still the same, but some parts of it are moved to the hardware level. If the algorithm is the same, then we can safely assume its complexity hasn't changed. But I'm afraid that in reality the complexity of mixing soft and hard parts can be essentially bigger than the complexity of a pure software implementation. At the least, I can say the complexity won't be less than in the case of a pure software approach. And it means that any argument for paging being of any help from the complexity point of view is just a misunderstanding of the real picture.
Schol-R-LEA wrote:
embryo2 wrote:
But there's a small part of the algorithm that is implemented in hardware and can be reused by the system developer. As a time saver it's nothing in comparison with the decrease in flexibility you will suffer, so it's not worth considering this part at all.


I'm not following you here, either. Where is the loss of flexibility in this?

If you use some algorithm that is implemented in hardware, then you have no choice but to accept its constraints. The constraints can be about data structures, execution flow, value range limitations and so on. With a pure software approach there are no such constraints, so it is more flexible: it is possible to implement data structures and everything else without any limits imposed by the hardware.
Schol-R-LEA wrote:
Just to be clear, I am not seeing this as a matter of relying on a hardware mechanism, but rather using the available hardware to supplement the default software mechanism. The specification for memory management is that it provide enough memory for each process or raise an error when it cannot, and release memory that is no longer in use at the earliest point it can. So long as it accomplishes this goal, the details of the implementation should be of minimal importance, so how it is implemented on a given hardware platform should reflect what is most efficient for that system. Is paging going to be efficient on a given platform? Without profiling some test code and applying cost-benefit analysis, there's no way to know.

If you have a pure software implementation, then you can measure the impact of a hardware-optimized solution. But if you start with the hardware-optimized solution, then you have no chance to compare it with the software-only approach. At the least, you'll spend a lot of effort on the optimization, and then face the nightmare of reimplementing the optimized solution with some very different approach.
Schol-R-LEA wrote:
Perhaps I am wrong in this, but my thought is that it reduces the amount of memory it has to manage at a given point - the memory manager need only concern itself with the memory used by the current process. It can use a smaller, more localized representation of the data map, and has a smaller search space to work in when trying to determine things such as which memory is free for re-allocation.

With binary search, 8 additional comparisons give you 256 times more memory - a very good effort/value ratio. Next, why do you think a process can't occupy all the memory available to the system? A smaller per-process memory map just won't work in such a case. And finally, those 8 comparisons will in fact be performed even with a small per-process memory map, because the hardware implementation of paging also uses algorithms to search for the actual memory range. So paging speeds up just 8 comparisons, while the actual memory management algorithm can employ 100-1000 comparisons. Even if we decrease the number of cycles required by 8% (in the best case, when paging takes no time and the rest of the algorithm is equivalent to just 100 comparisons), we still get only a tiny speed increase for the target process, because memory management is not the only thing it's supposed to do. Just 100 additional comparisons of the process's actual work bring the overall speedup down to about 4% - and what if there are more than 100 comparisons per memory request?
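The arithmetic behind those figures, made explicit (the 100- and 8-comparison counts are this post's own assumptions, not measurements):

```python
# Doubling the memory covered by a binary search costs one extra comparison,
# so 8 extra comparisons cover 2**8 = 256 times more memory.
extra_memory_factor = 2 ** 8

def overall_saving(process_work, mm_work, mm_saved):
    """Amdahl-style estimate: the fraction of total runtime removed when
    `mm_saved` comparison-units disappear from memory management."""
    return mm_saved / (process_work + mm_work)

# 100 units of the process's own work plus 108 units of memory management,
# of which paging hardware saves 8:
saving = overall_saving(100, 108, 8)   # under 4% of total runtime
```

The point of the estimate is that even a free hardware search only shaves the memory-management slice, so the whole-process gain stays small.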
Schol-R-LEA wrote:
While these are not advantages specific to paging, the paging system is useful for implementing them.

I can't see the actual usefulness. Is a tiny fraction of the memory range search really a useful advantage?
Schol-R-LEA wrote:
embryo2 wrote:
Schol-R-LEA wrote:
I gather that you intend to write a single high-level garbage collector that would be used unchanged on all hardware, correct?

My goal is the standard top-down approach: I want a simple and manageable architecture, where all extensions (including hardware-related optimizations) are implemented as manageable parts with a simple interaction protocol.

This statement seems a bit vacuous, or at least vague. I can see what you are saying, I think, but it just boils down to you applying basic software design principles.

The basic garbage collector you were talking about is the most important part of the design, precisely according to the "top-down" principle. The optimization part is not as important. But the goal is to do the optimization once the quality of the higher-level parts is good.
Schol-R-LEA wrote:
Furthermore, without details, these statements are meaningless, as terms such as 'manageable parts' or 'interaction protocol' can mean almost anything. What qualifies as an interaction protocol? That could be anything from a specific JVM bytecode for triggering a system service to a trap or soft interrupt to a simple function call using the standard Intel C calling convention to a system call that triggers loading a particular software module. It is a meaningless statement unless we can grasp what kind of protocol we're talking about.

It's a high-level description; its purpose was to show you the approach, not to describe the details. The approach requires a good high-level design first, and the hardware-related issues come next.
Schol-R-LEA wrote:
My thought was basically that paging is usually triggered when a page not currently in memory is accessed, which often means that one or more pages will have to be swapped back out of memory to make space. My idea was that when a page is to be paged out, it would force a GC cycle on that page before writing it, meaning that only the live data would be written to the swap storage - and if there is no live data, the page as a whole can simply be freed. It seemed to me like a fair means of amortizing the garbage collection cycles as well (rather than waiting for the whole arena to get filled), though I haven't actually calculated this. It wouldn't be the only time the memory is reclaimed, of course, but it would mean it was getting triggered at points where memory is likely to be running low.

There's a balance between claiming and freeing memory. The goal is to shift it away from situations where there's no more memory available, because in such a case the pause required to free some memory will be much more noticeable than with early memory reclamation. So the paging here is useless again (or useful only under very rare conditions).
Schol-R-LEA wrote:
Are you claiming that an 'uncontrolled access in managed environment' can only arise out of a bug in (presumably) the 'managed environment'? That sounds excessively optimistic to me, as it assumes that there will never be any security exploits or hardware transients. If you truly believe that it is possible to write any kind of software that can prevent all unforeseen exploits, and that hardware glitches are either too rare or too unimportant to be addressed, then no rational response can be made to such an unrealistic viewpoint.

I agree that it is possible to find an exploit in mature code. But it's still a bug; at least that is what I meant when writing the post. However, such bugs can be hard to detect, and I agree that some means of hardware protection can be useful, at least for the kernel memory space.
Schol-R-LEA wrote:
embryo2 wrote:
So it's really a debugging matter. And for debugging there's the idea of a debug session with many additional hardware features involved (like debug registers or even virtualization). That's why I see no point in the security-related paging stuff.


I have no idea what you mean by this. None at all.

The idea is simple: bugs should be debugged, and we need some tools for that. But all those debugging tools are just a mess for the end user. So, the idea is simple: split the process into two parts, debugging time and production time. During debugging it is possible to use paging, but in production it doesn't seem viable.
Schol-R-LEA wrote:
The only thing I think I can get from it is that you see security as something that can be provided in a single software layer, a claim that seems absurd to me. Security is a process, one with several layers, and requires active participation on the parts of the users, application developers, and administrators, as well as the OS developers. No amount of 'managed code' can prevent a user from running a program that he or she has rights to use in a way that compromises their data. 'Managed environments' are at most one line of defense, and not even the most important one at that - the most dangerous security risks do not come from software exploits, but from social engineering tricks that get the user to do what you want, often without writing a line of code. There is no way for any program to prevent a user from giving away information they have legitimate access to, or doing something foolish but permissible with their system if they have the rights needed to perform that action.

Of course, security is a complex task. But all your words above have nothing to do with the paging issue.
Schol-R-LEA wrote:
Perhaps I am being too 'belt and suspenders' here, but defense in depth seems a no-brainer to me. Assuming that any one mechanism cannot be compromised almost ensures that once it is compromised, the entire system will be. Your Managed Environment is the software equivalent of the Maginot Line, statically defending the wrong things in the wrong ways and in the wrong places while ignoring the more serious threats elsewhere, and at such a great cost that it undermines the ability to make a more dynamic response once it is bypassed.

Let's concentrate on one issue at a time. The Maginot line is about everything, while kernel protection, for example, is possible without paging. So it's important to separate things from each other, or else it's all a mess.

_________________
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Wed May 25, 2016 10:00 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
I'll need to go over what you are saying - or at least what I think you are saying - for a while before I can really respond, and in any case I think we are talking at cross purposes. I intend to drop this for now and move on to other, more productive discussions, or at least focus on taking more notes about my design ideas.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu May 26, 2016 4:05 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Schol-R-LEA wrote:
I'll need to go over what you are saying

You are welcome :)

_________________
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Wed Jul 19, 2017 9:51 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
This is a test project I am doing while exploring ideas about how I might change the Thelema design. I mean it to be a simpler language with a minimal interpreter implementation so I can play around with some new ideas, ones which are quite different from the original plan. These are mostly just some very loose idea sketches, but the general goal is to push minimalism in the core language as far as possible - perhaps even to the point of it being Destructive Testing (in the sense that I want to see how far I can take it before it becomes untenable).

I am posting these notes here (rather than just to my GDrive or the project Github repo) mostly to ask for input from others, and to make it available to others to adopt if they like in case I lose track of the plans (OK, I don't expect this to happen, but the feedback would be appreciated).

Design notes on Apophasi
-------------------------
A sacra shall be a set of source code tokens which form a program or section of a program.

The sole syntactic element in a sacra is the 'symbol', which consists of one or more non-whitespace glyphs excluding a leading colon (':').

A value is a semantic element. It may or may not have an explicit syntactic representation.

An object is a unit of data representing a value or group of values. Objects may exist either at translation time, or at runtime, but not both (though there may be both translation-time and runtime objects for a given value).

An object may have one or more display representations (also called views).

A channel is a means of presenting a view. Not all channels can present all views.

A symbol with a leading colon shall be treated as a keyword. Keywords exist solely as semantic structures in the sacra, and do not directly correspond to objects existing at runtime. A keyword may hold a value (or rather, reference an object) at translation time.

Glyphs can be any displayable character which the parser can read and recognize as unique.

At minimum, the lexer should recognize every standard Unicode code point as a unique character. Unicode UTF-8 is the default. A lexer for the language must also be able to work with the Unicode UTF-16 encoding and automatically transform one into the other as needed.
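For illustration, the required UTF-8/UTF-16 round-tripping can be sketched with Python's built-in codecs (nothing here is Apophasi-specific; the sample text is arbitrary):

```python
# Both encodings carry the same code points; a conforming lexer must be
# able to transform one into the other losslessly.
text = "λ → ¶"
utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16-le")

# Round-trip check: UTF-16 back to UTF-8 reproduces the original bytes.
assert utf16_bytes.decode("utf-16-le").encode("utf-8") == utf8_bytes
print(len(utf8_bytes), len(utf16_bytes))  # 9 10
```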

The lexer shall be able to build DFA recognition tables for a local scope, which should be able to recognize all known symbols within that scope independent of the parser. However, the scoping of the recognizer tables shall itself be controlled by the parser. When these tables go out of scope, the temporary tables are set aside in a cache, so that they do not necessarily need to be rebuilt later.

A recognizer table for exposed elements of a scope should be exposed for any scope which those exposed elements are imported by.

A lexeme cluster shall consist of one or more symbols. The lexer shall first separate the lexeme clusters of a sacra by whitespace, then scan the lexeme cluster for any known symbols.

When an unknown symbol is found, it shall be added to a temporary sub-table of the lexer's DFA table for the current scope. Control over whether the symbol shall be added to the scope's actual recognition table shall be controlled by the parser.
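A toy sketch of the scoped recognizer and its temporary sub-table of unknown symbols (Python; the class and method names are my own invention, not part of any spec, and a real implementation would use cached DFA tables rather than this longest-prefix scan):

```python
class ScopeRecognizer:
    """Per-scope symbol recognizer: splits a sacra into whitespace-separated
    lexeme clusters, then matches each cluster against the known symbols."""

    def __init__(self, known=()):
        self.known = set(known)   # symbols committed to this scope's table
        self.pending = set()      # temporary sub-table of unknown symbols

    def scan(self, sacra):
        tokens = []
        for cluster in sacra.split():
            rest = cluster
            while rest:
                # Longest known prefix wins; otherwise record the remainder
                # as an unknown symbol for the parser to accept or reject.
                for end in range(len(rest), 0, -1):
                    if rest[:end] in self.known:
                        tokens.append(rest[:end])
                        rest = rest[end:]
                        break
                else:
                    self.pending.add(rest)
                    tokens.append(rest)
                    rest = ""
        return tokens

lex = ScopeRecognizer(known={"∃", "x"})
print(lex.scan("∃x foo"))  # ['∃', 'x', 'foo'] -- 'foo' lands in pending
```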

There shall be a default Top Level Scope for any given sacra. However, this TLS is not shared by other sacra.

In interactive translation, there shall be a Top Level Evaluation Display which can accept any final evaluated results there might be after processing. The default shall be to display the values in a suitable representation.

When the parser is given a known symbol at the start of a sacra or scope, it should determine what actions that symbol can perform, and what messages/arguments it can accept. It should interpret the symbol, and any subsequent symbol(s) following it,
in an order to be determined by the translator. The symbol's evaluated result is then applied to its successor.

Any symbolic object can accept at most one successive symbol as an argument. However, these objects may themselves accept successors. Furthermore, depending on its evaluation, this successor may be a compound object.
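This "one successor at a time" rule is essentially currying; a Python sketch of the idea (the names are illustrative, not Apophasi syntax):

```python
# A binary operation modelled as a chain of single-argument applications:
# 'add' accepts exactly one successor, and its result itself accepts one.
def add(x):
    return lambda y: x + y

# Applying symbol by symbol, left to right, as the parser would:
result = add(2)(3)
print(result)  # 5
```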

The translator should expose the following meta-properties to a sacra and/or scope:
0) whether a symbol is already known or not.
1) the ability to bind an object value to a symbol, treating the symbol as the object's name.
2) Whether an object has any views, and whether one of them can be presented via a specific channel.
3) the ability to assign and access meta-data about an object, such as default interpretation (i.e., the object's type), the object's size, the set of messages the object can accept.
4) the ability to assign and access meta-data about a symbol, such as its sacra representation, its scope, its bound (or evaluable) object value if any, whether to evaluate it for its value or leave it as itself (i.e., quoting), its runtime representation, etc.
5) the ability to add a symbol (or set of symbols) to the lexer table.
6) the ability to add specialized grammar to the parser.
7) the ability to process code at translation time as a way of generating new code (i.e., macros)
8) the ability to incorporate 'primitives' from the implementation language and 'foreign functions' from object-code libraries.
9) the ability to manipulate how the code accepts messages, both in terms of recognition and parsing at translation time, and in terms of ABI (e.g., order of evaluation, lazy vs. eager evaluation, argument passing conventions, etc.).
10) the ability to add specialized code generation to the translator back end.
11) the ability to define new meta-data information attached to either a symbol or a value, especially regarding recommended presentation, version control, or external documentation.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Last edited by Schol-R-LEA on Fri Jul 21, 2017 11:06 am, edited 1 time in total.

 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Wed Jul 19, 2017 9:56 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
More of the same, but these go in a slightly different direction so I wanted to separate them here.
--------------------------------

The default symbols which any symbol should be capable of accepting as messages (barring specialization modifiers such as immutability) are:
0) ':defined-as' ('⁓' - note that this is a swung dash, not a tilde - or ':=') for definition. This may or may not indicate initialization, depending on the evaluation and whether it is present as an object at runtime or not.
1) ':reset!' ('←') for re-assignment.
2) ':equal' ('=') for identity equality.
3) ':equivalent' ('≡') for value equivalence.
4) ':inequal' ('≠') for non-identity.
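The split between identity equality and value equivalence parallels distinctions many languages already make, for instance Python's `is` vs `==` (a rough analogy, not the proposed semantics):

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a
print(a == b)  # True:  value equivalence (':equivalent' / '≡')
print(a is b)  # False: distinct objects, so identity (':equal' / '=') fails
print(a is c)  # True:  same object, identity holds
```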

Predefined Top Level Symbols include:
0) ':exists' ('∃'), for a symbol with a valid value. In operator form, it may be kerned as a leading glyph (that is, it can be part of a single lexeme cluster with its argument following it).
1) ':identity', the universal identity function.
2) ':parameter' (a pair of leading and trailing vertical broken bars '¦'), creates a new scope nested in the current scope and creates a group of symbols which can be bound at a suitable future point to a set of symbols, values, and/or expressions. The new scope itself can take one of three successors:
a) ':procedure' (the Phoenician letter 'lamed' - for some reason, the Unicode points for this and 'dalet' don't save to the forum database correctly, but the code point is U+1090B), creates a structure specifically to perform a computation at some future point. Its own successor is either a series of expressions ended with a pilcrow ('¶'), or a linear sequence (e.g., a tuple, a list) of expressions, or one of the keywords ':primitive' or ':foreign' followed by the necessary information for binding the external procedure.
b) ':transformer' or 'macro' (the Phoenician letter 'dalet', Unicode code point U+10903), defines an operation for replacing the parameters with a generated sequence of expressions. Its successor is the same as for a procedure.
c) ':function' ('↦'), for defining a relationship between a set of values and another set of values. The relationship is a constraint, not a computation, though a computation may be needed to establish and enforce the relationship. The successor of 'function' is a mapping.

A pilcrow ('¶') can be used to disambiguate the end point of an expression.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 5:16 am 

Joined: Fri Aug 19, 2016 10:28 pm
Posts: 199
Schol-R-LEA wrote:
But what if there was a way of maintaining a state record on a per-block level that the interrupt handler or scheduler can trap when the asynchronous pre-emption occurs?
There is already information exactly like that in the exception records section of the executable, but it is indexed by rip through sorted arrays that the run-time must binary-search. So, the metadata approach that exceptions use today is inefficient as a performance optimization. There may be a better way, but it could involve other costs.
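As a rough model of what such a run-time lookup does (the addresses and handler names below are purely illustrative; real unwind data is considerably more involved):

```python
import bisect

# Hypothetical unwind table: sorted (start_rip, end_rip, handler) records,
# mirroring the rip-indexed sorted arrays described above.
records = [(0x1000, 0x1080, "fn_a"),
           (0x1080, 0x1200, "fn_b"),
           (0x1400, 0x1500, "fn_c")]
starts = [r[0] for r in records]

def lookup(rip):
    """Binary-search the sorted table for the record covering rip."""
    i = bisect.bisect_right(starts, rip) - 1
    if i >= 0 and rip < records[i][1]:
        return records[i][2]
    return None  # rip falls outside every recorded range

print(lookup(0x1100))  # fn_b
print(lookup(0x1300))  # None (gap between fn_b and fn_c)
```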

Assuming I have your permission to quote the GDrive doc here:
Schol-R-LEA wrote:
System-Integer - Basically, an object wrapper for $Raw-Integer. Always exact.

Integer - a 'big' integer type. Must be at least as large as a System-Integer. Always exact.
In my opinion, types with system-dependent ranges are useful only in very specific circumstances; generally you want something that can store a specific maximum number with a specific precision. So, specifying types the way Java or modern C and C++ do is preferable in most cases. I can see exceptions to this argument, such as indexing types. Normally you try to support arrays/containers with as many elements of the user data as the architecture/platform can deal with, so you want to be able to scale with the address size of the architecture. I can also see the need for "small" and "big" index types, because on 64-bit, many collections can realistically use a 32-bit index, which also happens to be faster for some operations (depending on the platform). At minimum, though, I would specify the lower bound of the supported range for the fundamental integer types. Also, you have probably thought about it, but all of this must somehow tie into the network and storage serializability of the types.
Schol-R-LEA wrote:
Keyword Markers for fn
----------------------
:pure - enforce referential transparency; if the procedure applies a form, macro, or procedure which has non-local effects, an error is raised.
:lazy - all procedural arguments are to be treated as if they had the ":lazy" keyword annotation. Automatically sets the procedure to :pure.
:memo - memoize the results of the procedure as a two-stage dictionary of argument-list-to-values and indexed on the argument-values-to-result; when a new argument list is passed, it is evaluated, and the value is added to the first dictionary, then the values are checked against the second dictionary for the results. Automatically sets the procedure to :pure and :lazy.
:no-tail-call - disables tail call optimization up to the stage indicated by the stage argument. This is the default for #development-experimental and #development stages, but must be explicitly set for higher levels.
I get the feeling that the keywords in this group do not belong together. For example, pure is more of a function specification, whereas no-tail-call appears to be an optimization directive. It is true however that no-tail-call also means that the function can use only limited recursive depth, so it does specify its behavior to some small degree. From ":memo" I can deduce that functions are callable objects, so the question is, should it be a keyword and not a library feature. I like it, but it probably belongs with a more general mechanism for decorating functions by meta-programming (even if the same syntax serves for its application.)
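For illustration, a one-stage memoization sketch in Python (the ':memo' description above specifies a two-stage dictionary; a single argument-list-to-result cache shows the principle, and also why :pure is a prerequisite - caching is only sound when the procedure has no non-local effects):

```python
import functools

calls = 0  # count actual evaluations of the body

@functools.lru_cache(maxsize=None)
def fib(n):
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
print(calls)    # 21: each n in 0..20 is evaluated exactly once,
                # instead of the exponentially many calls without the cache
```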

embryo2 wrote:
For applications, for example, it is possible to provide mutable pages, while for kernel - immutable. Also it is possible to live without paging at all. The main question is about what the paging is required for in a modern OS? And if the OS employs runtime compilation then it is required to weight the benefits of possible solutions with or without paging. For a quick start, obviously, it's easier to do without paging at all. Next it is important to prevent applications to spoil each others memory because of bugs, but if the OS is going to rely heavily on a flexible compiler and managed code then it seems to me the space for bugs will shrunk quickly with the progress of the compiler development. Without pointer arithmetic it's mostly about automatic memory management which can spoil memory in case of buggy garbage collector, for example. So, in the end some polished compiler and good garbage collector can make the protection means (like paging) obsolete.
It is true that range checking in managed languages offers a solution to some, if not all, of the security issues that paging is trying to solve. Also, the checks can be elided by the compiler in some circumstances. It is conceivable that, if the x86 vendors wanted to, they could add further hardware support that would bypass even normal caching and branch prediction. Even so, without automatic memory management such checks are useless. Managed execution guarantees that each pointer points at a valid object and that each indexing reference inside the object is within bounds. Modern hardware checks each memory access instead, which means that it treats the address space (and its pages) as the object, and does not care whether the access is logically correct. This simplifies the solution significantly.

This brings us to tracing garbage collection, which is a different story entirely. First, AFAIK there is no easy way to parallelize the graph processing without synchronization overhead: it is cheap in its stop-the-world variant, but becomes expensive in its real-time form. But that is not even the primary concern. Tracing, and especially generational tracing, defers finalization of objects and therefore resource reclamation. Reference counting, as an alternative, does not have this issue, and can actually be used in combination with a tracing collector, but it causes significant overhead on SMP (due to the constant coherency traffic of atomics), and is unable to handle cycles on its own. Since kernels need immediate resource reclamation, minimal concurrency overhead, and real-time guarantees, using a tracing collector, or even over-relying on any kind of automatic memory management, can be a problem. There is a trade-off here with security and development costs, especially on monolithic kernels, but the economic incentives are apparently insufficient. For user code, I basically agree that some applications can be hosted by a single process on x64 (which AFAIK is possible today) to save TLB flushes from process switching. But not all applications, and generally speaking not kernel code.
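The cycle limitation of reference counting is easy to demonstrate in CPython, which combines reference counting with a cycle-detecting tracing collector:

```python
import gc
import weakref

gc.disable()  # ensure only the explicit collect() below runs

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle, then drop the external references.
a, b = Node(), Node()
a.ref, b.ref = b, a
probe = weakref.ref(a)   # observe the object without keeping it alive
del a, b

# Reference counting alone cannot reclaim the cycle...
assert probe() is not None
# ...but the tracing (cycle-detecting) collector can.
gc.collect()
assert probe() is None
gc.enable()
```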

Assuming that the kernel does not use managed code, it still does not need to use paging. That is, particularly for monolithic kernels, if all the applications were managed, memory protection would be less useful (only for things like data execution prevention, stack guards, etc.). However, address space translation also deals with fragmentation of physical memory, provides memory-mapping of files, and allows swapping. So, it can be a convenient feature.

What I personally would like to see someday is a single address space with process isolation inside it. Something like this. (I believe Schol-R-LEA mentioned the term in an OS chronology he posted recently in this forum.) I imagine some technical and performance issues with that, but they should not be too hard to resolve. Paging deals with data visibility. While this could be used to restrict access, it is unnatural and unrelated in my opinion; the two problems should be solved separately. Additionally, the current approach forces micro-kernel designs either to perform redundant context switching or to be asynchronous, which is not adequate for all applications. And micro-kernel designs are long overdue.


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 7:01 am 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 118
I like your language, especially the BNFs. Could you please give some example code? With some basic algorithms, like bubble sort or a local-maximum search in a set?

Also, I have read through the spec, and I found some parts lacking. Since Turing we know that all computers are finite state machines operating as commands are read. Since von Neumann we store commands in the state space. It has been proven (by Böhm and Jacopini, in the structured program theorem) that every algorithm can be expressed with a combination of three control structures: sequence, conditional branch, and loop. The first one basically means concatenation of commands working on the same state space, meaning that if one command changes the state, then the next command will receive that changed state as its input. Conditional branches allow different commands to be executed depending on a specific state. Finally, loops allow repeating commands, implementing iteration. (It is worth mentioning that though grouping commands into a new command is very useful and practical (creating functions or procedures), it is not required to guarantee that all algorithms can be expressed in a language. It is also proven that every recursion can be transformed into iteration.)
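The recursion-to-iteration transformation mentioned above can be illustrated with factorial (a Python sketch):

```python
def fact_rec(n):
    """Factorial by recursion."""
    return 1 if n == 0 else n * fact_rec(n - 1)

def fact_iter(n):
    """The same function with the recursion turned into a loop: the state
    the call stack held implicitly becomes an explicit accumulator."""
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc

assert fact_rec(10) == fact_iter(10) == 3628800
```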

Maybe I was unperceptive, but I haven't seen the loops in your spec, meaning it's probably correct but definitely not complete as a general-purpose language. Am I wrong?


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 10:12 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
bzt wrote:
Maybe I was unperceptive, but I haven't seen the loops in your spec, meaning it's probably correct but definitely not complete as a general-purpose language. Am I wrong?


Part of the issue is that I simply haven't addressed many parts of Thelema yet. These are preliminary design notes, not a spec per se. The same applies to the more recent post about Apophasi.

However, the real reason has to do with the design assumptions, and specifically with how conventional languages view flow control versus how Lisp does (and most FP languages as well - most Lisps aren't pure-functional languages per se, but they are well suited to FP practices, and most of the early work on what would become the functional paradigm was done in MacLisp).

Basically, most Lisps favor the use of recursion for repetition, rather than explicit iteration. There is no specific iteration syntax in most Lisps, and many FP languages such as Haskell follow suit. For Lisps, it is all done as 'special forms' - that is, macros, or built-in syntax that is still in the form of an s-expression. For example, a simple version of the factorial function in Scheme might be:

Code:
   (define (fact n)
     (cond ((not (integer? n)) 'Invalid-value)
           ((> 0 n) (fact (abs n)))
           ((= 0 n) 1)
           (else (* n (fact (- n 1))))))


You could use (labels) (Common Lisp) or a named (let) (Scheme) to recurse within a block without having to define a full function:

Code:
   (define (fact-named x)
     (cond ((not (integer? x)) 'invalid-argument)
           ((> 0 x)(fact-named (abs x)))
           (else
            (let loop ((i 1)
                       (j 1))
              (if (>= i x)
               (* i j)
               (loop (+ 1 i) (* i j)))))))


While most Lisps have some kind of iteration form - (do) in Scheme, (loop) in Common Lisp and Clojure (which also has (doseq) for iterating collections) - in most instances the assumption is that the compiler or interpreter will convert tail recursion to iteration, so an explicit iterator isn't actually necessary for anything that can be done by tail recursion.

In fact, in Scheme (do) is defined as a library form, and it is often implemented as a macro that generates a recursive lambda form. The 'iterative' form of the factorial above would be:

Code:
   (define (fact-do x)
       (cond ((not (integer? x)) 'invalid-argument)
             ((> 0 x) (fact-do (abs x)))
             (else
              (do ((i 1 (+ 1 i))
                   (j 1 (* j i)))
                  ((> i x) j)))))


Common Lisp's (loop) is used much more often than Scheme's (do) form, and is a lot more baroque, but it is still mostly an s-expression form.

Clojure is a bit of an odd man out here, as the JVM - which it primarily targets, though .NET and JavaScript targeted versions exist - didn't support tail recursion elimination for a long time. Even so, there's a definite preference for recursion.

Furthermore, many things which would be done with local loops (using a structured statement such as for()) in most procedural or OOP languages are done using higher-order procedures such as (map)/(mapcar), (reduce)/(fold), and (filter), meaning that they aren't explicit repetitions at all. Again, all of these may be either higher order functions, or macros generating lambdas.
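For readers more at home outside the Lisp family, the same pattern can be sketched in Python (the Scheme equivalents are noted in the comments):

```python
from functools import reduce

xs = [1, 2, 3, 4, 5]
# (map)/(mapcar): transform each element without an explicit loop
squares = list(map(lambda x: x * x, xs))
# (filter): keep only the elements satisfying a predicate
evens = list(filter(lambda x: x % 2 == 0, xs))
# (reduce)/(fold): collapse a sequence to a single value
total = reduce(lambda acc, x: acc + x, xs, 0)
print(squares, evens, total)  # [1, 4, 9, 16, 25] [2, 4] 15
```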

It is also quite common to define new, specialized loop constructs as needed using a macro.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Last edited by Schol-R-LEA on Fri Jul 21, 2017 11:16 am, edited 4 times in total.

 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 11:30 am 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 118
Looks really great! I like it! If you write an ANSI C compiler for it, I promise I'll port it to OS/Z! :-) Currently I am planning to write a LOGO interpreter (my older boy is at the age where I can start getting him interested in programming :-)). There's still a long way to go until I can have a compiler; right now I'm struggling with the VFS, as I want to implement some unusual features, unknown to unices.
I know how it feels when you have more in your head than you are able to write down :-) But it seems to me you have the whole picture (in your head at least), which is really, really good! Keep up the work on your language!


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 1:20 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
bzt wrote:
Looks really great! I like it! If you write an ANSI C compiler for it, I promise I'll port to OS/Z! :-)

While I am intending to write a bootstrap compiler in Clojure (I decided that it was closer to the target design than Scheme) and then write a self-hosted version for the long-term system, I do mean to have a back-end that targets C as a way to speed up porting. While the main goal is to have system-specific assembly language or native machine code back-ends, the C targeted back-end (and others targeting JVM and .Net, possibly) are definitely sensible alternatives.

bzt wrote:
Currently I was planning to write a LOGO interpreter (my older boy is at that age to start influence him with programming :-) Still there's a long way to go until I can have a compiler, right now I'm struggling with VFS as I want to implement some non-usual features, unknown to unices.
I know how it's feel like when you have more in your head than you are able to write down :-) But it seems to me you have the whole picture (in your head at least), which is really really good! Keep up with your language!


When you are ready to work on the Logo interpreter, you might want to look at the Scheme from Scratch series of blog posts. Logo is very close to Lisp in many ways, and that series might help you work out ways to implement it. Mind you, there are several existing Logo implementations such as UCB Logo and StarLogo, but I can understand wanting to write your own.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Thu Jul 20, 2017 5:09 pm 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 118
Thank you very much for the link! And for the Logo interpreters as well! But I know how to write one :-) I wrote my first more than 20 years ago. There's a funny story about it. At school we had to write a Logo program as homework. We were supposed to use Comenius Logo. At the time, ComLogo did not support recursion, but I wanted to create a Koch fractal really badly. So I wrote a Logo interpreter of my own (in QBasic - oh my god, it was so long ago, when Windows 3 was brand new), and I handed in my homework along with my own interpreter... What can I say, my teacher was shocked :lol: He suggested at once that I learn to be a programmer-mathematician.


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Mon Nov 27, 2017 1:42 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
Just a quick note: it occurs to me that some of the confusion around Lisp compilers and interpreters vis a vis other types of compilers could be cleared up a bit by explaining that a tagged datum is, in effect, equivalent to a lexical token.

Now, since all of the syntactic elements of a homoiconic language are either tagged data literals, symbols internally represented as tagged data, or delimiters defining a literal for a tagged data form (such as the parentheses around a list in Lisp, or the double quotes around a string literal, or the various ways of representing individual characters), this means that the (read) primitive most Lisps provide is, in fact, a full lexical analyzer for the language.

Furthermore, since the program code is in the form of a list literal, the result of tokenizing the REPL or compiler input is an abstract syntax tree of the code. So, lexical analysis of a Lisp program is also parsing of the code.
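A minimal (read)-style reader makes this concrete: tokenizing an s-expression already yields the nested-list AST. (A Python sketch; the structure is my own simplification, not taken from any particular Lisp implementation.)

```python
def read(src):
    """Minimal (read) sketch: tokenize an s-expression and build the
    nested-list AST in a single pass."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        if tokens[pos] == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = parse(pos)
                lst.append(node)
            return lst, pos + 1          # skip the closing ')'
        tok = tokens[pos]
        # Tag the datum: integers become ints, everything else a symbol.
        return (int(tok) if tok.lstrip("-").isdigit() and tok != "-" else tok), pos + 1

    return parse(0)[0]

print(read("(define (fact n) (* n (fact (- n 1))))"))
```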

While most simple implementations use an ad-hoc approach to lexical analysis, since the language is so simple, some more advanced implementations do apply conventional DFSA techniques - but the resulting lexer and parser are then exposed as standard library functions.

This is the main reason why discussions of Lisp translators, especially meta-circular ones, generally don't discuss lexical analysis or parsing.

So why am I bringing this up here? Because I intend to (eventually) test the idea of using adaptive state automata in the lexical analyzers for my languages, which would generate new token types for each new symbol or syntactic structure defined by the programs, and use that for handling things like read macros. If it works - and there is no guarantee that it will - it should perform token recognition even faster than a conventional lexer despite the extra overhead. And as with other Lisps and Lisp-like languages, it would expose this ability to the client-programmer - but my intent is to go even further than existing ones.

It is an ambitious, and probably risky, goal. I have no idea if it will work, nor if doing so would have any real benefit. That is why it is experimental.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 
 Post subject: Re: [language design] My working notes on Thelema
PostPosted: Tue Nov 28, 2017 8:41 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1057
Location: Athens, GA, USA
OK, so Solar's rant about the importance of const-correctness made me realize that, once again, I was keeping things in my head that I really needed to get on to (virtual) paper, and ask for opinions on.

This one is a bit out in left field, to say the least. Strap yourselves in, this is going to be a rough ride, as it intersects language design, compiler design, and some rather unconventional ideas about document systems and hypertext (though ironically enough, it is rooted in the foundational concept of hypertext, as I will explain).

As should be clear to anyone reading the earlier posts of this thread, I am taking a lot of cues for my language concepts from Scheme, and to a lesser extent Common Lisp and Clojure, though I am adding influences from non-Lisp languages as well - a long list of them, in fact, which includes Forth, Smalltalk, Haskell, APL, Erlang, Python, Ada, and Self (a Smalltalk derivative that eschews classes in favor of prototypes, and has several interesting ideas about both member slots and implementation techniques).

It's a pretty wildly divergent group, to be sure, and trying to draw from such differing perspectives is, to say the least, confusing.

But the main inspirations are in the Lisp family, even if the language I am moving toward right now probably won't actually be one - that has yet to be decided, hence the Apophasi experiment.

Right now, that experiment is on hold pending two preparatory projects, consisting of a comprehensive walkthrough of the interpreter and compiler examples given in the book Lisp In Small Pieces (which I strongly recommend to anyone interested in interpreters and compilers, even if you have little interest in Lisp - it covers several variations on the basic design of a Scheme translator, as well as several language variants), followed by a preliminary exploration of how I intend to write my own code translation engine (note this term). The results of those will influence where I go with Apophasi, and that in turn may change the final design plans for Thelema (so many names, too much bikeshedding... sigh).

Anyway, let me get to the first thing I need to get written down, which is that in the longer term, I don't intend to use conventional source files and source management techniques. I am going to be using them for now, because the system I would need to support my intended system is going to have to wait on the first few iterations of my language designs so I can then implement them (there are a lot of bootstrapping problems in all this), but I am already trying to find ways to jury-rig something suitable in the short term.

Please bear with me, I am getting to my point but it is going to take some explaining. Hell, the original point I meant to make will have to wait for a follow up post, because this one is already pretty long. I will post this one now to make sure it doesn't vanish into the ether, then post my explanation of the source code structure ideas, then I will finally get to my ideas about typing, type checking, procedure dispatch, and procedural arguments and returns.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 