Cool ^^ The number is jarring, especially in a mathematical context, but ultimately necessary, since we cannot detect the arity of functions.
Arc has absolutely no axioms for introspecting function objects - it can only detect whether an object is a function, and nothing else. I hope to rectify this in the Arc dialect I'll build for SNAP - there should be a way to decompose a function into a function code object (potentially serializable!) and its closed-over data, as well as to determine other bits about the function.
This version still needs some massaging but it works with non-constant biases:
(mac bias args
  (let bs (map car (pair args))
    `(let r (rand (+ ,@bs))
       (if ,@(let i 0
               (rev (accum a
                      (each c (map cadr (pair args))
                        (a `(< r (+ ,@(cut bs 0 (++ i)))))
                        (a c)))))))))
arc> (with (a 1 b 2 c 3) (bias a 'red b 'white c 'blue))
white
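The expansion above amounts to drawing a random number below the total weight and comparing it against running partial sums of the weights. Here is a rough sketch of the same algorithm as a plain function, in Python (the function name and shape are mine, not code from this thread):

```python
import random

def bias(*args):
    # args alternate weight, value - mirroring (bias a 'red b 'white c 'blue)
    weights, values = args[0::2], args[1::2]
    r = random.uniform(0, sum(weights))
    total = 0
    for w, v in zip(weights, values):
        total += w          # running partial sum, like (+ ,@(cut bs 0 i))
        if r < total:
            return v
    return values[-1]       # guard against floating-point edge cases

bias(1, "red", 2, "white", 3, "blue")   # one of red/white/blue, weighted
```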
Your macro has problems with variable capture and multiple evaluation (see chapters 9-10 of On Lisp). Here's a version that should work properly:
(mac bias args
  (w/uniq r
    (withs (ws (map car (pair args))
            xs (map cadr (pair args))
            us (map [uniq] ws))
      `(with ,(mappend list us ws)
         (let ,r (rand (+ ,@us))
           (if ,@(mappend
                   (fn (u x) `((< (-- ,r ,u) 0) ,x))
                   us xs)))))))
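Instead of comparing against growing partial sums, this version decrements the random draw by each weight in turn and picks the value at which it first goes negative - that is what (< (-- ,r ,u) 0) does. A quick Python rendering of that technique (again my own sketch, not code from the thread):

```python
import random

def bias(*args):
    weights, values = args[0::2], args[1::2]
    r = random.uniform(0, sum(weights))
    for w, v in zip(weights, values):
        r -= w              # the (-- r u) step
        if r < 0:
            return v
    return values[-1]       # floating-point safety net
```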
IMO, though, the use of a macro here is a premature optimization. I think you should try to get a function working first, and then wrap a macro around it if you know that's what you need. See my comment http://arclanguage.org/item?id=7760 for an example of such a wrapper macro (in CL, but the Arc is similar).
I still suggest you take a look at how I do it at http://arclanguage.com/item?id=7765, which (1) avoids multiple evaluation, and (2) avoids variable capture.
(1) is the hard part here, which is why I had to use a list.
Also: the reason it needs ints is that the 'rand function works on integers. We could also define a rand-float function which creates a random floating-point number and use that instead:
The above now works with weight expressions that return real numbers. Also, as specified, only the chosen result expression is executed; all of the weight expressions, however, are executed.
(defm <base>compose ((t a int) (t b int))
  (annotate 'composed-int
    (cons a b)))

(defm <base>compose ((t a composed-int) (t b int))
  (annotate 'composed-int
    (cons a b)))
Then we can redefine the call* table as an overloading of the <base>call function:
(defm <base>call ((t l cons) (t v composed-int))
  (let (first second) (rep v)
    ((l first) second)))
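In other words: composing indices builds a small pair object, and calling a list with that object indexes through it one step at a time. A hypothetical sketch of the same dispatch idea in Python (the class and function names are mine):

```python
class ComposedIndex:
    """Mirrors the 'composed-int annotation: a pair of indices."""
    def __init__(self, first, second):
        self.first, self.second = first, second

def compose(a, b):
    # mirrors <base>compose: the left operand may itself be composed
    return ComposedIndex(a, b)

def call(l, v):
    # mirrors the <base>call overload: calling l with a.b behaves like ((l a) b)
    if isinstance(v, ComposedIndex):
        return call(call(l, v.first), v.second)
    return l[v]

nested = [[10, 20], [30, 40]]
call(nested, compose(1, 0))  # ((nested 1) 0) => 30
```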
> 2. Allow the reader to treat a parenthesized item as an item that supports intrasymbol syntax.
This is probably going to conflict with the (a . d) format.
Currently macros are part of the desugar phase - macroexpansion and intrasymbol syntax expansion are done in the same step. This allows a macro to return symbol syntax, which might itself expand into another macro call.
In particular, you should notice that the syntax there involves some intrasymbol syntax, which it expects to have visible.
Specifically, if you make intrasymbol syntax part of the reader, you get the following potential problem:
intrasymbol syntax will have to be completely regular across the entire Arc environment. You can't do tricks like I did in 'w/html, where div.bar doesn't mean (div bar) but rather means "the <div> element with the bar id". Yes, you could probably modify w/html so that it can understand (div bar); but what if the programmer wants to redefine the intrasymbol syntax for foo#bar to mean, say, (en-number foo bar)? Then suddenly w/html will break. And what if programmer B wants to redefine foo#bar to mean (number-en foo bar)? How will anything that uses a slightly different intrasymbol syntax work with that?
Programmer A decides he or she wants to specially treat #\@. Programmer B decides he or she doesn't. Now, load Programmer B's code into Programmer A's environment. Oh, and Programmer B has been writing a lot of functions with "@" in their names.
If you're not going to allow #\@ to be specially treated, why should you specially treat #\., #\!, #\~ or #\: ?
#\' and friends, after all, aren't intrasymbol syntax. In fact, #\. is treated differently in the context of a symbol than in the context of a list.
This is where "code is spec" fails, badly. Me, I say someone has to choose one or the other, define it as "this is the spec!", and have everyone follow it. Your move, PG?
If the reader can be configured (e.g. by specifying which read table to use), then two modules that use different reading conventions can coexist simply by using their own configurations.
Now programmer C wants to use both programmer A's module and programmer B's module. Which readtable does he use so that he can freely intermix macros from A with macros from B, which have different expectations of the reader?
Reader hacking is nice, but I don't see it often in CL libraries (note: counterexamples are welcome; it's not like I've made an exhaustive search for them). Any reader hack must make the cut of being good and generic enough that it will be used by everyone; take for example the Arc-style [ ... _ ... ] syntax.
CLSQL modifies the read table to let you write embedded SQL queries such as [select "A" [where [= ...]]] and the like (I've never studied the exact syntax, but this should give you the idea). The special reader in CLSQL can be activated/deactivated through function calls that modify the default reader.
It looks like CLSQL needs reader macros to switch the syntax on and off locally. If Arc had reader macros, then you could do this:
#.(with-A (mac macro-A ..blah..blah..in special A syntax))
Assuming 'with-A is a function that sets the read table locally, and macro-A uses quasiquote to generate its result, this will produce a macro that produces standard Arc syntax, even though it's written in the A syntax.
With reader macros, 'w/html could be implemented even if de-sugaring were moved to the reader, although you'd have to call it with #. all the time.
It makes sense to me that macros should always expand to vanilla Arc syntax (or maybe even pure s-exps without any ssyntax) so that they are portable across environments.
Depends on how the module system is constructed. If modules are first-class objects and not a set of symbols like CL packages are, then the module name itself may be shadowed by a local, i.e. 'cplx itself could fall victim.
> Perhaps we could have a type of macro which adds these module-specifying prefixes to your function calls when the macro is expanded