I'm missing something in your comment about tagged types - is rainbow doing it wrong? Arc internally uses a vector to represent tagged types afaik, but rainbow uses a custom "Tagged" class. The example you give behaves in the same way in rainbow.
> I'm surprised at the shortness of welder.
It might be cheating to use java's ui libraries - most of the hard stuff is really in there. But at least now there's a way to call all that library goodness right from arc. I'd like to abstract away everything that looks like java so that, theoretically (and if we really had nothing better to do), the same arc ui library could be implemented in any other host language that provides a ui.
> I'm somewhat uncomfortable with f!show as being equivalent to f.show()
I set up 'defcall so that for a java-object 'thing,
(thing 'method arg1 argn)
is equivalent to the java call
thing.method(arg1, argn);
It seemed the natural thing to do, but it's only the first version ...
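So with the frame f from above, calls read like this (setSize here is just another method picked for illustration):

(f 'show)            ; the same as f.show() in java
(f 'setSize 200 100) ; the same as f.setSize(200, 100)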
> I'm missing something in your comment about tagged types - is rainbow doing it wrong? Arc internally uses a vector to represent tagged types afaik, but rainbow uses a custom "Tagged" class. The example you give behaves in the same way in rainbow.
No, I just got confused with the lack of 'rep in the call* test ^^ Sorry!
(thing!method arg1 argn) feels more natural to me, but I don't have any idea of any potential mismatch in the way java and arc work.
It's true, when you write it that way, it feels more natural. At first glance, though, this means (thing 'method) must return a method-invoker, which then gets invoked with zero or more arguments. It seems like this would add complexity - instead of calling the method, we call to get the invoker and then call it. The question, as always, is whether the increase in readability merits the increase in complexity ... we'll see what happens I suppose.
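To make that concrete, a minimal sketch of the invoker idea, assuming some primitive like java-invoke for the actual reflective call (the name is made up):

; hypothetical: (thing 'method) returns a closure that performs the
; real java call only when it is applied to the arguments
(def method-invoker (obj name)
  (fn args (java-invoke obj name args)))

Then thing!method expands to (thing 'method), which returns the invoker, and (thing!method arg1 argn) applies it to the arguments.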
inheritable thread-locals? ouch! java has non-inheritable thread-locals out of the box, but if inheritance is really important it could be done ...
scheme semaphores appear to work much like java's wait/notify ... does anybody know java and scheme well enough to comment? Or where is a good place to find scheme docs?
> In ArcN only 'ssyntax is accessible from Arc-side
I don't understand: ssexpand is also callable from arc. Do you mean the arc-side definition of ssexpand is not necessarily the one used by the base system?
'ssexpand appears to be invoked at the same time as macro-expansion - is this correct? I'm guessing that the base system should invoke the ssexpand defined in arc in order to expand symbols correctly.
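For reference, here's roughly what 'ssexpand gives at the arc prompt (in ArcN/Anarki at least):

(ssexpand 'car.x)  ; => (car x)
(ssexpand 'car!x)  ; => (car (quote x))
(ssexpand '~acons) ; => (complement acons)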
Does any anarki code depend on the implementation of 'annotate using vectors? Rainbow uses a custom "Tagged" class, and anything that assumes 'annotate returns a vector would break in this case.
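Code that only goes through 'type and 'rep should behave the same on both representations:

(= x (annotate 'widget '(a b c)))
(type x) ; => widget
(rep x)  ; => (a b c)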
Err, this is just what I defaulted it to. Normally the underlying scheme doesn't default to inheritable, but I thought it might be useful.
Note that as of now there are zero applications/libs which make use of thread-locals, and there are zero applications that require inheritability. In theory, you could still safely modify Anarki's thread-locals away from that default, but then what if...
> scheme semaphores appear to work much like java's wait/notify
Being a complete Java noob, I wouldn't actually know. However, in the mzscheme docs a semaphore is a synchronized up/down counter. A wait (via 'sema-wait) blocks until the counter is nonzero, then decrements it. A post (via 'sema-post) increments the counter, waking a waiter if there is one. So it's really a generalization of a mutex (which is just a flag).
I'm building a shared-nothing message-passing processes library which depends on the "counter" behavior (specifically, the semaphore's count is the number of messages the process has received but hasn't yet moved into its own mailbox).
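Roughly the shape of it, as a sketch - new-sema is a made-up constructor name here, pending* just stands in for the raw incoming queue, and locking of the queue (and the LIFO order) is ignored to keep it short:

(= pending* nil)
(= mailbox-sema (new-sema 0))  ; counts received-but-not-yet-mailboxed messages
(def deliver (msg)
  (push msg pending*)
  (sema-post mailbox-sema))    ; one more message waiting
(def fetch ()
  (sema-wait mailbox-sema)     ; blocks until at least one message has arrived
  (pop pending*))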
> > In ArcN only 'ssyntax is accessible from Arc-side
> I don't understand: ssexpand is also callable from arc.
This is correct. My bad ^^
> 'ssexpand appears to be invoked at the same time as macro-expansion - is this correct?
Yes.
> I'm guessing that the base system should invoke the ssexpand defined in arc in order to expand symbols correctly.
For Anarki compatibility.
As an aside, this feature is currently unused in Anarki because of the severe slowdown in macroexpansion.
> Does any anarki code depend on the implementation of 'annotate using vectors?
Yes, lib/settable-fn.arc . However, take note that this was written before defcall, and in fact 'defcall was written in response to this. lib/settable-fn2.arc is a rewrite which uses 'defcall.
1. With tail-call optimisation, don't some stack frames just completely disappear, so by default there's no way for an interpreter to list them in a backtrace? I might have completely misunderstood TCO, of course. And if we redefined 'def so that it inserts code to build a stack trace, would this not cause exactly the kind of unbounded stack growth that TCO is designed to eliminate, unless the stack-trace builder detects and excludes recursive calls?
2. Backtraces are even nicer with source file and line number information. Maybe I'm spoilt, having grown up in javaland. But with macro-expansion and quasiquotation and so on, how can an interpreter tell what file/line number a particular expression comes from? And as a user (an arc user, that is) trying to make sense of a back trace (obviously not a real hacker, as noted by almkglor above), is it more useful to know the source of the macro into which my erroneous expression expanded, or to know the pre-expansion source of the problem?
I haven't a clue what other lisps do for backtraces.
1. Yes. This is actually good. You don't want a 1000-iteration loop cluttering a backtrace, do you?
Without TCO:
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
in function gs42
....985 more times....
in function foo
in function bar
from top level
With TCO:
in function gs42
in function foo
in function bar
from top level
Less is more?
That said, if you're implementing a state machine using tail-called functions, you lose the trace of the state and can only get the current state (arguably a good thing too - you don't want to have to hack through 600 state transitions, either). You'll probably have to dump the state on-screen or on-disk somehow instead.
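For example, a toy state machine written as mutually tail-calling functions:

(def state-a (n)
  (if (> n 0) (state-b (- n 1)) 'halted-in-a))
(def state-b (n)
  (if (> n 0) (state-a (- n 1)) 'halted-in-b))
(state-a 600) ; with TCO only the current state is on the stack;
              ; the 600 transitions that got you here leave no trace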
2. Yes, this is difficult. IIRC some Schemes actually internally use an abstract syntax tree (not lists) and make macros work on that tree, not on lists. The abstract syntax tree includes the file and line number information.
The problem here is: what do you end up debugging, the macro or the code that uses the macro? If the macro-expansion is a complicated expression, then the bug might actually be in the macro, with the original code quite correct.
In arc2c the file and line number info are lost pretty early. Maybe we could use an AST direct from the file read-in, and silently fool macros into thinking that the AST is a list.
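Very roughly, and with every name made up, the read-in side could look like:

(def source-tag (form file line)
  (annotate 'ast (list form file line))) ; keep the location next to the form
(def ast-form (x) ((rep x) 0))           ; the plain list a macro would be handed
(def ast-file (x) ((rep x) 1))
(def ast-line (x) ((rep x) 2))

The hard part is the "silently fool" bit: every list operation a macro might use would have to transparently unwrap and rewrap these.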
1. I've thought about this too, and then looked at how it works in SBCL: it doesn't show tail-call-optimized frames in the stack backtrace. I've developed a few programs with SBCL and this has never been a problem.
My understanding is that ssyntax can only be part of symbols, so you wouldn't be able to write :(a b c). At any rate, I think the idea of using it is to ape keyword parameters in Common Lisp.
Personally, I like & and | for andf and orf, but as almkglor mentioned, | is taken for "odd" symbols (e.g. '|this is a symbol|), using just / would interfere with things like w/stdout, and using & and // would be asymmetric. In any case, having some ssyntax for andf and orf is definitely a good idea.
Taking inspiration from discrete math, we could use ∧ and ∨ as andf and orf, respectively. This would be in line with pg's current use of ~ as not. On the other hand, maybe we shouldn't make use of non-ascii characters... (or else you get APL et al)
I would say use ^ and v, but something tells me that disallowing a v in identifiers would be a bad idea (e.g. eval) ;)
My biggest problem with using Unicode is that it's often a pain to type. The Mac gets this the most right of any platform I know, but even so, it's (a) not a standard, and (b) mostly alphabetic. Which is a pity, really, since those do make the most sense.
> My biggest problem with using Unicode is that it's often a pain to type.
Agreed. We might be able to add a hook to arc-mode in emacs to make it more convenient, but that reminds me even more of APL (which, if I recall correctly, required a custom keyboard to type).
> + might not be bad, but it's probably a little too common
I like +. While + does get used in symbols occasionally (most notably in arithmetic), I can't think of any cases where it gets used in the middle of a symbol.
EDIT: On second thought, when I use plus outside of a programming context, I usually mean and, so it might be confusing to use + as 'andf.
The problem with "adding a hook to arc-mode in emacs" is that then you alienate everyone not using Emacs. And that's a good point about + being used to mean &.
"maybe we shouldn't make use of non-ascii characters": why not? Characters like λ have their place in a Lisp, and I think ∧ and ∨ have their place too. Anyway, such characters wouldn't be used very frequently (do you often use andf & orf?) and could be ignored if typing them is too painful. That's better than consuming ASCII characters that sometimes fit well in symbol names, IMO.
I think it would be hilariously ironic to make non-ASCII Unicode characters part of Arc's syntax and functions.
A few proposals for giving functions new names: ☢ for atomic, ✄ for cut, ✔ for check, ⌚ for time, ⌛ for sleep, ☠ for kill-thread, ☇ for zap, ♭ for flat.
But anyway, I still think they're not worth losing an ASCII character, and using mathematical notation would be very useful. It would make code readable by people a little aware of mathematics. That is, most programmers. It would be definitely better than arbitrary characters.
Why should we restrict ourselves to ASCII anyway? I mean, a lot of symbols I use are not ASCII anymore (they are accented - I'm French, so something like 'year is translated into 'année, not into 'annee). Sure, they're hard to type, but are they any longer than their ASCII counterparts? If you type them often, just make them a vi macro (or whatever in your favorite text editor) and you're done.
It might end up looking like APL, for sure, but I think Fortress, Sun's new language designed by Guy Steele, is going that way too. And Steele cannot be wrong :)
I don't mind non-ASCII, I mind weird non-ASCII. Even in English, we leave ASCII behind: “As I was going to the café—what fun—my naïve friend said…” It's just that I don't know of any keyboard layout that supports ∧ or ∨. I agree that they would look great, as would Fortress.
I wonder if anyone's given any thought to using (La)TeX in a language? So that something like (number{\wedge}acons var) would be the same as (number∧acons var)? Or just as a nice way of typing things like Fortress? (Which I agree looks very interesting.)
I'd probably use them a lot more often if the syntax was a little easier, which is why I suggested using ssyntax for them. Currently the syntax is ((orf this that the-other) foo), and doubled starting parens feel rather strange to me.
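Concretely - the second line uses the proposed & ssyntax, which doesn't exist yet, purely for comparison:

(keep (andf number odd) '(1 a 2 b 3)) ; => (1 3), with today's doubled-paren syntax
(keep number&odd '(1 a 2 b 3))        ; the same predicate if & expanded to andf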
The test runner is still a bit limited - I haven't thought about how to test sockets or threads yet, for example.
It would be great to have a complete suite for arc so we can play spot-the-difference more easily when the next version comes out.
In fact, it might even be useful, for each test, to declare which implementations and versions it should pass for, so that we can run the suite and say with confidence "this conforms to Anarki 3.14, and to arc17", for example.
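Something like this, purely as a sketch - deftest and is are hypothetical helpers, not anything in the current runner:

(deftest cut-from-index
  '("anarki 3.14" "arc17" "rainbow")  ; implementations/versions expected to pass
  (is (cut "abc" 1) "bc"))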
(no c) indeed does what I want, but I find its name counter-intuitive and not psychologically sound. Saying (if (no x)) does not sound as natural as (if (not x)). I could define not by saying
(= not no)
Or:
(def not (x) (no x))
Etc. I recall searching for not in the arc sources and getting nothing because "no" was used instead. It's "not" in Perl 5, Ruby, etc. - not "no", so I assumed Graham would use it, but he didn't.
Regarding the tests - I can integrate them into my own test suite, but I only accept code that is under the MIT X11 licence, so I may not be able to copy-paste code like that. (I still don't know what licence the original Arc code is under).
Regarding your suggestion for implementation/version declaration, this seems interesting and is supported in TAP using TODO and SKIP tests, but will naturally require more logic and complexity.
Awesome, thanks for the explanation. Is this a language design choice, or is there some fundamental reason that it must be so? I ask because the ((fn () ... )) case works in rainbow - but I would like to have a set of tests that behave the same way on ac.scm as well as on rainbow - this might ultimately prove useful to other arc implementors too ...
The fundamental reason is really the problem of how to implement macros.
Most Lisps are targeted towards compilation. So what happens is really like this:
your code
|
v
macro expander
|
v
compiler
This means that if you define this code:
(mac my-add (x y)
`(+ ,x ,y))
(def my-function (x y z)
(my-add (my-add x y) z))
Then the macro expander will expand the code to
(set my-function
(fn (x y z) (+ (+ x y) z)))
But if we really, really wanted to have macros as first class, then how would the following code get compiled?
(def my-function (x y)
(my-oper x y))
(mac my-oper (x y)
`(+ ,x ,y))
(pr (my-function x y))
(mac my-oper (x y)
`(- ,x ,y))
(pr (my-function x y)) ;exactly the same call, completely different meaning
(mac my-oper (x y)
`(* ,x ,y))
(pr (my-function x y)) ;exactly the same call, completely different meaning
If we supported macros as first-class objects, then a "compiled" program would have to compile itself while running, because the macros might have changed between invocations of the function. In such a case, you might as well have just stuck with the interpreted version.
The problem isn't intractable (you could do JIT compilation and check whether the macro inside the variable is still the same as the older macro you used), but it's not easy either. And besides, most people don't find a need to redefine macros anyway. So most Lisps just punt: the rule is, the macro exists before the Lisp reads the code that uses it.
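You can see the punt in plain ArcN: the expansion is baked in when the function is compiled, so redefining the macro afterwards has no effect on it:

(mac my-oper (x y) `(+ ,x ,y))
(def my-function (x y) (my-oper x y))
(my-function 3 4)              ; => 7
(mac my-oper (x y) `(- ,x ,y))
(my-function 3 4)              ; still 7: my-function kept the old expansion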
But couldn't you decide that any macro was only evaluated once (so the my-oper in my-function wouldn't change), but macros were searched for in the lexical namespace anyway? This would mean that any call with a lexical in the functional position would have to be checked for macro-expansions at runtime, of course, but it would be slightly more reasonable.
Yes, but again: compilation during runtime. Meaning (most likely) some sort of JIT. Wanna try to implement this? You could start by hacking this onto pg's arc-to-scheme implementation.
Okay. Be careful to still be able to properly handle environments, without actually turning it into an Arc interpreter.
pg's ArcN is really an Arc-to-Scheme compiler. And I strongly suspect that this was the main reason for not implementing first-class macros. Macros are intended to work on the code before the compiler does, so having true first-class macros is a little difficult.