That doesn't help if you want to kill a thread that is stuck (or just taking a long time) inside an (atomic ...) expression, which seems to me an important use case. Unfortunately, the PLT doc website appears to be down right now; otherwise I'd look into solving it myself.
There shouldn't be a way to interrupt an atomic operation, because then it won't be atomic.
Your elegant solution of making kill-thread itself atomic is a good one, provided the "killed" thread is in fact guaranteed to terminate instantly... I'll have to look into it and see if I can find out.
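For reference, I take the solution in question to be something like this sketch (assuming Arc's 'atomic and 'kill-thread):

  ; since every (atomic ...) section shares one global lock, doing the
  ; kill inside atomic means the killer waits for that lock, so the
  ; target thread can't be in the middle of an atomic section
  (def kill-thread-safely (th)
    (atomic (kill-thread th)))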
> There shouldn't be a way to interrupt an atomic operation, because then it won't be atomic.
An interruptible atomic is basically a transaction that can be rolled back: if it gets interrupted, nothing is changed. It probably doesn't make sense to run it from anywhere but kill-thread and similar operations.
To restate my original point: reasonable semantics for atomic are that threads inside atomic can't be killed. How to kill long-running atomics is a separate issue, needing a lot more engineering.
It takes a less efficient but more general approach than type-based dispatch; I find it more useful, at least in a language like Arc. An older and somewhat modified version of it is on anarki at lib/extend.arc (http://github.com/nex3/arc/blob/master/lib/extend.arc). It's incompatible, unfortunately; I may fix that at some point, but I'm unconvinced that labels weren't a good feature.
Say, have you ever used labels? When I wrote my first version, I said, "ah, labels, good idea, I'll put them in". Then after using it for months I noticed that I wasn't ever using the label. So I took it out on my next iteration.
I have used them. Mostly I used them for debugging, rewriting, "code sketching" as it were. Without them, extending or reextending a function at the REPL is just annoying; if you ever make a mistake in the test function you have to reload the original function. I suppose it might make sense to have two macros - extend and extendl, maybe - one which doesn't take a label, and one which does.
Undo & reextend could easily be implemented on top of labeled extend - just store the label last used when extending a given function in a table, along with whatever information is needed to undo that extension, and have an unlabeled extension macro that uses a gensym as the label. That way we get labeled & unlabeled extend with undo & replacement.
In the Arc server, I moved creating the request object out of 'respond (http://awwx.ws/srv-misc), so that 'respond is now passed the request object and performs the usual Arc srv action for responding to a request: looking to see whether there is a defop etc. defined for the request path.
Now I can extend 'respond to implement my own ways of responding to requests: I can easily have my own way of choosing which action to take (I can have http://myserver.com/1234 display the "1234" item page, for example), or implement my own kinds of defops.
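For instance, something in this spirit (purely hypothetical: the 'path field on the request object and the 'item-page handler are invented here, and it reuses the extend sketched earlier; the real interface is in srv-misc):

  ; if the request path is all digits, serve the corresponding item
  ; page instead of looking for a defop
  (extend respond
          (fn (req) (errsafe:coerce (req 'path) 'int))
    (item-page it))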
That's from back before the transition to arc3, so it'll now be on the arc2.master branch rather than anarki master. (Obviously, if anyone would care to translate it and push it, that would be great.)
GHC (the de facto standard Haskell compiler) uses "implicit parameters" as the name of an extension that is almost, but not quite, equivalent to dynamic binding. Namely, in GHCi:
Prelude> :set -XImplicitParams
Prelude> let f = (let ?x = 0 in \() -> ?x) in let ?x = 1 in f ()
0
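whereas the direct Common Lisp translation, with *x* as a special variable, picks up the dynamic binding in force at call time and evaluates to 1 (my sketch of the translation):

  CL-USER> (defvar *x*)
  *X*
  CL-USER> (let ((f (let ((*x* 0)) (lambda () *x*))))
             (let ((*x* 1))
               (funcall f)))
  1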
I think that, since "dynamic variables/binding" is the name that has always been used before, and "implicit variables/parameters" is a name used to refer to something subtly different, it might be better to use the former; but it doesn't really matter that much, and "implicit" does do a better job than "dynamic" of getting across the purpose of dynamic binding.
That's interesting that Haskell also uses the term "implicit". The differences seem mostly related to the type system. For example, since the variables are statically typed, it's easy for them to overload "let" to do dynamic binding with implicit parameters. Is there some other subtle difference that I'm missing?
The "it's always been called X before" argument would prevent us from ever improving the language by coming up with better names; we'd still be calling anonymous functions "lambda" instead of "fn" etc.
Er, see the post you replied to: the code I gave is precisely the same modulo syntax, but in Common Lisp it evaluates to 1 and in Haskell to 0. Essentially, Haskell's implicit variables don't allow rebinding/shadowing. It's a bit hard to explain, but look at the example and play around a bit and you'll see what I mean.
It's consistent, but it means that passing special forms to higher-order functions is essentially just a form of punnery - it lends you no more expressiveness. There is no way (unless I'm mistaken) to get (map if '(t nil) X), where X is some expression, to evaluate the first element of X (or its evaluation) but not the second. So I might as well just define a function:
(def iffn a
  (iflet (c . a) a
    (iflet (x . a) a
      (if c x (apply iffn a))
      c)))
(map iffn '(t nil) '(1 2) '(5 6))
=> (1 6)
It may not matter to you, but IMO it matters to the community (what little of it there is) that anarki and arc not break compatibility in such a simple case as this.
I very much expect the current behavior. I can't really say why; it just seems natural to me that a list as a case means "pattern-match", not "any one of these". Also, it would break compatibility with pg's arc. Adding a new macro, say, 'mcase ("multi-case") or 'orcase, would be totally fine though.
That's a clever idea. I think I'd probably use a macro that lets me check a value against a series of expressions (perhaps using some form of currying). I don't think I'd call it "case" though :-)
I've got a macro like that, which I call 'test. The problem with combining that and 'case is distinguishing functions and constants before the cases are eval'ed.
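Something shaped like this, perhaps (my own sketch, not rntz's actual 'test; deciding at runtime whether each case is a function or a constant is one way around that problem):

  (mac casetest (x . clauses)
    (w/uniq (gx gt)
      `(let ,gx ,x
         (if ,@(mappend
                 (fn ((test result))
                   ; a case that evaluates to a function is applied to
                   ; the value; anything else is compared with 'is
                   `((let ,gt ,test
                       (if (isa ,gt 'fn) (,gt ,gx) (is ,gt ,gx)))
                     ,result))
                 (pair clauses))))))

  (casetest 5
    odd  'five-is-odd
    10   'ten)
  => five-is-odd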
Have you used Parsec? It's Haskell's parser-combinator library, and I noticed quite a few similarities. Haskell's typeclasses also turn out to provide a lot of general abstractions that can be applied to parser-combinators; for example, your 'on-result is approximately equivalent to Haskell's 'fmap, which is the function for genericised mapping. (And of course there's the fact that the Parsec parser type is a monad, but I won't get into that...)
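For instance (my example; 'number is an invented name, and fmap here plays the role your 'on-result plays):

  import Text.Parsec
  import Text.Parsec.String (Parser)

  -- fmap maps 'read' over the parse result: a parser of digit
  -- strings becomes a parser of Ints
  number :: Parser Int
  number = fmap read (many1 digit)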
On a side note, Parsec is a little different in nature, because it distinguishes between a failure without consuming input (soft) and failure after consuming input (hard); soft failures fall back to the closest enclosing alternation, while hard failures fall back to the nearest enclosing "try" block, and then become soft failures. This means that, if you're careful, you can write with Parsec without having to worry about the exponential blowup typical of the parser-combinator approach. I'd be interested to see whether your JSON parser has any such pathological cases.
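A small illustration of the soft/hard distinction (my example, using standard Parsec combinators):

  import Text.Parsec
  import Text.Parsec.String (Parser)

  -- on input "lexical", (string "let") consumes 'l' and 'e' before
  -- failing: a hard failure, so <|> would not try the alternative.
  -- Wrapping it in 'try' turns that back into a soft failure, and
  -- parse keyword "" "lexical" succeeds.
  keyword :: Parser String
  keyword = try (string "let") <|> string "lexical"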
I have used Parsec. One of the most dramatic differences is that Parsec supports backtracking, and so it's a more powerful parser. Since parsing JSON doesn't need backtracking, I got to avoid both the complexity of implementing backtracking and the need to carefully avoid exponential backtracking blowups :D
No, pushing doesn't require permission; that's more or less the point of a world-writable repository. You don't really need to put info on libraries in CHANGES/, though, as is said in CHANGES/README; it's more for changes to the core language.