Agreed with both of you, but I'd go further: I don't want to see (isa x 'cons) or (isa x 'sym) either, if possible. I'd rather every type be treated as equally non-fundamental. Of course, s-expression syntax special-cases those types sorta intrinsically, but I'm sure 'defgeneric could be used along with one of aw's "access the Arc compiler from Arc" patches. ^_^
It might be difficult and/or impossible though, considering that 'defgeneric needs to be defined in terms of something. So does calling, since the most obvious way to specify a custom calling behavior is to give that behavior as a function to call! XD
Like so many other opinions of mine, this is something that's going into Penknife if at all possible, even if the core currently needs a bunch of rewriting to get it to work.
To fix the "'defgeneric needs to be defined in terms of something" issue, I'm currently considering having most things be built-in rulebooks, with rulebook being a built-in type if necessary.
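To make that a little less hand-wavy, here's roughly the kind of thing I have in mind, sketched in Arc. All the names are made up, and none of this is settled design:

  ; A rulebook is an ordered list of rules. A rule declines by
  ; returning a unique failure marker, giving the next rule a shot.
  (= fail* (uniq))

  (def call-rulebook (rulebook . args)
    (catch
      (each rule rulebook
        (let result (apply rule args)
          (unless (is result fail*)
            (throw result))))
      (err "No rule in the rulebook applies.")))

Adding a rule would just be a 'push onto the list, so rules added later get first crack.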
For the calling issue, I'm going to have the built-in call behavior try certain hardwired things first, and only move on to the customizable calling rulebook if those don't work. I intend for it to be possible to replace the interaction environment with one that uses a different call behavior, so even that hardwired-ness should be kinda seamless with the language.
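Continuing the hypothetical sketch above, the call behavior might look like this, with call-rulebook* being the customizable rulebook:

  (= call-rulebook* nil)

  (def my-call (f . args)
    (case (type f)
      fn     (apply f args)    ; hardwired
      table  (apply f args)    ; hardwired
      (apply call-rulebook call-rulebook* f args)))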
For now, these things are all hand-wavy, and I'm open to better ideas. ^^
> but I'd go further: I don't want to see (isa x 'cons) or (isa x 'sym) either, if possible. I'd rather every type be treated as equally non-fundamental. Of course, s-expression syntax special-cases those types sorta intrinsically
Wow, I'm really interested in whether there's a way to have s-expressions that don't special-case conses and symbols. shader's suggestion that just came in [1] makes me think there might be a way to merge conses and symbols into a single type, though. Could it be possible?
Essentially, all you need to do is extend 'ac, since compiling is almost all that happens to Arc expressions. In the short term, there's no need to worry about whether a custom type represents a function call, a literal, etc. As long as it compiles, you can start returning it from macros or reader syntaxes.
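For example, supposing one of those compiler-access patches exposes 'ac as an ordinary Arc function, the extension for a made-up 'my-syntax type could be as small as this:

  ; Assumes a patch that exposes the compiler as an Arc function 'ac.
  ; A 'my-syntax value compiles as whatever its contents compile to.
  (let orig ac
    (= ac (fn (s env)
            (if (isa s 'my-syntax)
                (orig (rep s) env)
                (orig s env)))))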
In the long term, there may be other things that would be useful to extend, like 'ac-macex, 'ac-expand-ssyntax, and 'expand=. Also, it may be easier for a custom syntax type to support 'expand= if there's a separate utility it can extend in order to have all the functionality of a function call. That way it can be 'sref'ed.
Thanks for this guide. It should come in handy for me. :)
If I start messing around with Arc's internals too hard though, I may not be able to resist trying to turn it into an interpreter [1]. I'm too attracted to the notion of first-class environments, eval and fexprs lately. (In this case, I'd be extending eval rather than ac, correct?)
Or maybe I should just stop being such a damn purist. Have to take things one step at a time anyway. ac is a logical place to start.
Thanks for this guide. It should come in handy for me. :)
Well, I hope it actually works. :-p
If I start messing around with Arc's internals too hard though, I may not be able to resist trying to turn it into an interpreter.
Yeah. I would have just turned it into Penknife. >.>
I'm too attracted to the notion of first-class environments, eval and fexprs lately. (In this case, I'd be extending eval rather than ac, correct?)
Sure, but there's no interpreting 'eval to build on in Arc (unless you repurpose the macroexpander XD ). I'd find it easiest to approach by building it from scratch--hence kernelish.arc.
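To give a taste of building it from scratch, here's a hypothetical skeleton (not kernelish.arc itself): environments are first-class values, and an 'operative (a fexpr) receives its operands unevaluated along with the caller's environment.

  (def my-eval (expr env)
    (if (isa expr 'sym)
          (env expr)    ; a table works as an environment
        (acons expr)
          (let op (my-eval (car expr) env)
            (if (isa op 'operative)
                ((rep op) (cdr expr) env)
                (apply op (map [my-eval _ env] (cdr expr)))))
        expr))

  ; An 'if built as an operative, with no special-casing in my-eval:
  (= my-if (annotate 'operative
             (fn (operands env)
               (if (my-eval (car operands) env)
                   (my-eval (cadr operands) env)
                   (my-eval (car:cddr operands) env)))))

  ; (my-eval '(my-if a b c) (obj my-if my-if a t b 1 c 2))  =>  1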
Or maybe I should just stop being such a damn purist. Have to take things one step at a time anyway. ac is a logical place to start.
It's all up to whatever you can figure out how to build on, I think.
Also, I'd tell you not to write off purism so quickly, but unfortunately I only like purism in an irrational way. ^_^;
It's been on my radar before [1]. I've read some of the thread you linked to and some of what's on his github [2]. I like the general idea of giving ' and , more power to control evaluation, but I'm afraid I don't grok the language very well yet. :-/
Update: To clarify my confusion, the documentation talks a lot about closures (e.g. that ' does some kind of closure-wrapping), but I thought the language was supposed to be fexpr-based. I don't understand yet what fexprs have to do with closure-wrapping, but I really should study the language more closely.
Eight's documentation is in a terrible state (in part because there are still many things about which I've yet to make up my mind), so blame me for any confusion.
Here's the gist: Fexprs, like macros, take expressions as arguments (duh). Those expressions are made up of symbols (duh). Because a fexpr is evaluated at runtime, those symbols may already be bound to values when the fexpr is called. Eight keeps track of which symbol is bound to which value at the place the expression originated (where the programmer wrote it) --- even if you cons expressions together, or chop them into pieces. This eliminates the need for (uniq), but still allows for anaphoric fexprs when symbol-leaking is desired.
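For contrast, here's the dance (uniq) exists for in a macro system like Arc's; Eight's bookkeeping makes the first version safe automatically, since the author's tmp and the caller's tmp originated in different places:

  ; This Arc macro breaks when the caller's variable is itself named
  ; tmp, because both tmps end up being the very same symbol:
  (mac swap-unhygienic (a b)
    `(let tmp ,a
       (= ,a ,b)
       (= ,b tmp)))

  ; So in Arc you reach for (uniq):
  (mac swap-hygienic (a b)
    (w/uniq tmp
      `(let ,tmp ,a
         (= ,a ,b)
         (= ,b ,tmp))))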
When I wrote the docs on github, I called an expression plus any accompanying bindings a 'closure' (even though it wasn't a function). I also didn't know the word 'fexpr'. I've read a few dozen more old lisp papers since then, and hopefully on the next go-round my vocabulary will be much improved.
"there might be a way to merge conses and symbols into a single type"
Interesting idea. This might help a lot with implementing lisp in strongly typed languages. I suppose atoms could just be cons cells with nil in their cdr slot. The only problem is then how do you get the actual value out of an atom, and what is it?
This might help a lot with implementing lisp in strongly typed languages.
Don't most of them have option types or polymorphism of some kind? If you've got a really rigid one, at least you can represent every value in the lisp as a structure with one field for the internal dynamic type (an integer if necessary), at least two child fields of the same structure type, and one field for each built-in type you'll ever need to manipulate from the lisp (like numbers and sockets). Then you just do manual checks on the dynamic type to see what to do with the rest. :-p
The only problem is then how do you get the actual value out of an atom, and what is it?
I say the programmer never gets the actual value out of the atom. :-p It's just handled automatically by all the built-in functions. However, this does mean the cons cell representation is completely irrelevant to a high-level programmer.
> I suppose atoms could just be cons cells with nil in their cdr slot.
Could they be annotated conses with the symbol in the car and its value in the cdr (initialized to nil)? nil itself could then be a cons with the nil symbol in the car and nil in the cdr. This should achieve the cons-symbol duality for nil that's usually desired. (Follow-up question: annotate is an axiom, right?)
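In code, I'm picturing something like this (all names made up):

  ; An atom as an annotated cons: the symbol in the car, its value in
  ; the cdr (initialized to nil).
  (def make-atom (name)
    (annotate 'atom (cons name nil)))

  (def atom-value (a)
    (cdr:rep a))

  ; nil itself would be (make-atom 'nil): the nil symbol in the car
  ; and nil in the cdr, so it can pass for an atom or an empty list.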
I don't want to see (isa x 'cons) or (isa x 'sym) either
Totally with you there.
I don't want to get too hung up on 'purity'. It's ok to use tables in the core if you need them for defgeneric or something. It's ok to have a few isas early on. iso is defined as a non-generic bootstrap version in anarki before eventually being overridden, so stuff like that seems fine to me. I just want to move past the bootstrap process as quickly as possible.
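(For reference, that bootstrap version is just the plain structural recursion from arc.arc, more or less:

  (def iso (x y)
    (or (is x y)
        (and (acons x) (acons y)
             (iso (car x) (car y))
             (iso (cdr x) (cdr y)))))

and a generic version clobbers it later once defgeneric is loaded.)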
iso is defined as a non-generic bootstrap version in anarki before eventually being overridden
Sure, that's an okay way to go about it. ^_^ Since I'm doing the Penknife core library stuff from the top down right now, I'm just writing things the way I want to write them, trying to determine what axioms I need before the core library is loaded. If the high-level axioms are defined in another lower-level library, that's just fine, but I don't know why I'd bother with that when I can just put them in the Arc part of the core. :-p