That's an excellent idea. Instead of Arc ssyntax and JavaScript dot notation being at odds, they should work together.
I think this starts to get into the more interesting namespace issues raised in http://arclanguage.org/item?id=11920 that I've totally dodged up to this point.
So, if it's document!body and (document 'body), then document can be an Arc table. But if it's document.body and (document body), then will it need to be a function or macro instead? Hmm... I may need to play with this for awhile.
I've got something that covers most of the cases now. I used to take a quoted Arc expression and start translating it immediately. Now I do a recursive ssexpand first:
By the time the compiler's primary function (js1) sees x.y, it has already been expanded to (x y). Then the corresponding JS function call can be printed:
arc> (js `foo.bar)
foo(bar);
arc> (js `(foo bar)) ; prints the same
foo(bar);
For ! ssyntax, document!body now expands to (document 'body) as it should. Then the correct JS is printed:
It turned out like the one fallintothis proposed. (Thank you, by the way.) Note that the function wrapping is deliberate. [1]
But how does the compiler differentiate a function call from object access? Well, the implementation is naive: if there is only 1 arg and it's quoted, consider it object access; else it's a function call. So it no longer allows function calls with a single quoted argument (hence, "something that covers most of the cases"). The only way I could think of to get around this is to check the type of the item in functional position, but that can't be done until runtime. Any ideas?
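For what it's worth, that runtime check could be pushed into the emitted JavaScript itself. Here's a hedged, hand-written sketch (callOrRef is just a hypothetical helper name, not something the compiler emits today): treat the form as a call when the thing in functional position turns out to be callable, and as property access otherwise.

function callOrRef(f, key) {
  // if f is callable, treat (f 'key) as a function call; otherwise as property access
  return (typeof f === "function") ? f(key) : f[key];
}
callOrRef(document, "body");                        // property access: document.body
callOrRef(function (k) { return k + "!"; }, "hi");  // function call: "hi!"

The cost is an extra dispatch on every such call, which may or may not be acceptable.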
: ssyntax works now too! I defined compose and you can see that the equivalent expressions print the same:
arc> (js `(car:cdr (list 1 2 3)))
(function(){
var g13954=arraylist(arguments);
return apply(cdr,g13954).car;
})(list(1,2,3));
arc> (js `((compose car cdr) (list 1 2 3)))
(function(){
var g13956=arraylist(arguments);
return apply(cdr,g13956).car;
})(list(1,2,3));
You wouldn't be able to tell that they work from this, of course, since you're reading macroexpansions with gensyms. (Plus some of the functions are defined in JavaScript.) When executed in a browser, though, they both evaluate to 2.
So I guess the problem that inspired this poll is mostly solved now! That is, unless I've neglected something critical, made a terrible mistake and need to revert. ^_^
[1] (= x y) should compile to (function(){return x=y;})() rather than x=y for the same reason (if x y z) should compile to (x?y:z) rather than if(x)y;else z;. [2] (Thanks, rocketnia.)
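To make the footnote concrete, here's a quick hand-written JavaScript illustration (not output from the compiler) of why expression forms are needed:

var a = 1, b = 2;
// a statement can't sit where an expression is expected:
//   var biggest = if (a > b) a; else b;          // SyntaxError
var biggest  = (a > b) ? a : b;                    // ternary works as an expression: 2
var assigned = (function () { return a = b; })();  // wrapped assignment works too: 2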
but there's probably little need to use quotation in JavaScript anyway.
(= x y) should compile to (function(){return x=y;})()
I don't know why you suppose (x=y) wouldn't work for that case, but at least your approach will generalize consistently to the case (= x y z w). ^_^
On another note, you may want to change your function block format to "(function(){ ... }).call(this)". That way this has the same value inside and outside the block. This is especially relevant for something as basic as assignment; if I'm planning to use a function as a constructor and I say things like (= this!x 10), I'll be frustrated if I end up setting properties on the window object. :-p
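A quick hand-written illustration of the difference (assuming non-strict mode, where this in a bare function call is the global object):

function Coord(x, y) {
  (function () { this.x = x; })();            // `this` here is the global object
  (function () { this.y = y; }).call(this);   // `this` here is the new Coord
}
var c = new Coord(10, 20);
// c.x is undefined (x leaked onto the global object), but c.y is 20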
> at least your approach will generalize consistently to the case (= x y z w).
Yes, that's the reason. (= x y) was a poor example since you don't actually need the wrapping function for single assignment. In fact, I've provided a way to do it without:
arc> (js `(assign x y))
x=y;
Feels like cheating to compile assign to = (if you did it in Arc, you'd have a circular definition! ^_^), but I don't think JavaScript provides a more primitive assignment operator.
> That way this has the same value inside and outside the block.
Very astute! ^_^ I had become aware of the problem of this changing values but didn't know how to fix it. I will try your 'call approach soon. Thanks a lot!
arc> (ssexpandall ''dont-expand-me.im-quoted)
Another good point. Something else was making me question my ssexpandall formulation so this is going on my TODO. I think you're right that quotation isn't critical in JavaScript, but I do want to compile it correctly. I hope to eventually support eval:
I doubt quasiquotation will be supported though. (How would you compile that anyway, string concatenation?) I don't think JavaScript has anything quite like quasiquotation, which is probably why it doesn't have macros (which is in large part why lisp=>js compilers are attractive in the first place [1]). Additionally, there's an elegance to reserving unquote for escaping Arc code, which would be difficult to do if you compiled quasiquotation.
Edit: Hmm... that last paragraph may not be very well reasoned. Maybe string concatenation - or rather concatenating strings with unquoted expressions - is in fact a good counterpart to quasiquotation and I should compile it, even though JavaScript doesn't have macros. What do you think?
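For instance, something like this hand-written JavaScript sketch, where a runtime value is spliced into a code string and then evaluated:

var foo = 41;
var code = "(1 + (" + foo + "))";   // roughly the role of `(+ 1 ,foo)
eval(code);                          // 42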
I think the way you're going to go about it, by having (quote ...) forms be compiled, has a bit of a caveat. If you're already planning to have a!b expand to (a 'b) and compile to "a.b", then won't (js '(eval 'foo)) just result in "eval.foo"?
Maybe a!b should expand to (ref a "b") or something, where (ref a b) compiles to "a[b]" for most arguments but compiles to "a.contentsOfB" when the second argument is a literal string that counts as a JavaScript identifier. (The second case could be totally left off to make things easier; document['getElementById']('foo') is still a method call, and regular property gets and sets work too.)
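As a reminder of the JavaScript side (hand-written, not compiler output), bracket access and dot access reach the same property, so the a[b] form is always a safe fallback:

document["getElementById"] === document.getElementById;   // true
var key = "getElementById";
document[key]("foo");   // still a method call on document, with a computed name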
All that being said, I bet you already support eval(), in a way:
what would be (js '(eval '(alert (eval '(+ 1 2)))))
is expressed as (js `(eval ,(js `(alert (eval ,(js '(+ 1 2)))))))
The difference here is just syntax sugar, IMO. (Saving parentheses is a fine goal of syntax sugar, though!)
Maybe string concatenation ... is in fact a good counterpart to quasiquotation and I should compile it...
That feature would be a bit more difficult to simulate if 'js didn't support it intrinsically. Here's a quick approach:
(mac jswith (bindings . body)
  `(fn-jswith (list ,@(map .1 bindings))
              (fn (,(map .0 bindings)) ,@body)))

(def fn-jswith (vals body)
  ; We're adding the suffix "v" to each name so that it isn't the
  ; prefix of any other name, as might happen with gs1234 and gs12345,
  ; for instance. Note that it still counts as a JavaScript identifier
  ; with this suffix.
  (withs (strnames (map [string (uniq) 'v] vals)
          names    (map sym strnames))
    `((fn ,names
        (eval ,(multisubst (map [list (+ "('+" _ "+')") _] strnames)
                           (js do.body.names))))
      ,@vals)))
(mac jslet (var val . body)
  `(jswith ((,var ,val)) ,@body))
now what would be (js '(eval `(+ 1 ,foo)))
is expressed as (js `(eval ,(jslet f 'foo
(js `(+ 1 ,f)))))
where the final form sent to 'js is
(eval ((fn (gs1001v) (eval "'(1+('+gs1001v+'))'")) foo))
or expressed as (js:jslet f 'foo
(js `(eval ,(js `(+ 1 ,f)))))
where the final form sent to 'js is
((fn (gs1001v) (eval "'eval(\'(1+('+gs1001v+'))\')'")) foo)
I do feel that this difference is more than sugar, since the 'foo subexpression is moved out of context.
Also, more importantly, it has a security leak I'm not sure how to fix. The call to 'multisubst doesn't pay attention to the meaning of what it's replacing. If an attacker is able to get a string like "gs1001v" into a forum post or username or whatever in the server data, and then that string is embedded as a literal string in JavaScript code which is processed as the body of a 'jslet, something wacky might happen, and the attacker will be in a position to arrange things so that just the wrong wacky things happen.
If you just make a way to put identifiable "holes" in the compiled JavaScript, you'll remove the need to resort to blind string substitution here. The holes could be as simple as names surrounded by delimiters which you guarantee not to appear elsewhere in the result (even in string literals); that way a string substitution approach doesn't have to be blind. The holes could help you implement 'quasiquote, and conversely, if you implement 'quasiquote, there might not be much of a need for the holes.
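To sketch the hole idea on the JavaScript-string side (the delimiter here is only for illustration; any marker the compiler guarantees never to emit elsewhere, even inside string literals, would do):

var generated = "alert(1 + (\u0000hole:f\u0000))";               // compiled code with a hole
var filled    = generated.replace("\u0000hole:f\u0000", "foo");  // splice in an expression later
// filled === "alert(1 + (foo))"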
Additionally, there's an elegance to reserving unquote for escaping Arc code, which would be difficult to do if you compiled quasiquotation.
Well, the Arc quasiquotes are processed before 'js even sees the input, right? Here's the only problem I see (and maybe it's exactly what you're talking about):
A typical way to escape from two levels of nested Arc quasiquotes is ",',", as in `(a `(b ,c ,',d)). That constructs something that includes a (unquote (quote ...)) form, so it only works when you're sending the result somewhere where unquote-quotes don't matter (like, to be evaluated as Arc). So ideally, the 'js meanings of 'quasiquote and 'quote should have this property. I don't think this would be especially hard to guarantee, but it might be easy to miss.
(Note that if Arc's quasiquotes didn't nest, the same example would be expressed as `(a `(b ,',c ,d)), and no unquote-quote would hang around to be a problem. I'm beginning to wonder if nesting quasiquotes are a wart of Arc.)
It appears I've reinvented a worse version of your wheel. ^_^
Your ssexpand-all is superior and I'm using it now. I did try to refactor it, thinking there must be a function f (like my ssexpandif but more sophisticated) such that
(treewise cons f expr)
produces the same functionality, but I haven't been able to determine what that f would be.
arc> (ssexpand 'a:.b)
(compose a .b)
arc> (ssexpand '.b)
(get b)
So, we need to recurse in the f argument anyway. At a certain point, it seems like the anonymous & higher-order functions add layers of indirection on what should just be a straightforward recursive definition.
I get really annoyed at that, though, when working with trees in Arc. There always seems to be some underlying pattern that's just different enough that I can't abstract it into a higher-order function.
It's not a one-to-one translation, but it brought some oddities to the surface.
There's a disconnect between the single declarations (dec x foo ...) and the multi-declarations (dec x (foo bar baz) ...). E.g., with
(cmac w/msg sms () `(prn "dispatched on sms"))
(cmac w/msg voice () `(prn "dispatched on voice"))
(cmac w/msg email () `(prn "dispatched on email"))
We can do
arc> (dec x voice (w/msg x))
dispatched on voice
"dispatched on voice"
but
arc> (dec x (sms voice email) (w/msg x))
Error: "reference to undefined identifier: _x"
This is because
(dec x voice (w/msg x))
only looks at
(declare-obj-class* 'x)
to decide what to do, but
(dec x (sms voice email) (w/msg x))
treats x as a variable and dispatches on (type x) -- in my implementation, a case, in yours, an if with a bunch of isas. Using type for arbitrary "classes" won't do so well in Arc as it stands, because custom types aren't used very liberally. But, it's possible.
arc> (= x (annotate 'sms 'blah))
#(tagged sms blah)
arc> (dec x (sms voice email) (w/msg x))
dispatched on sms
"dispatched on sms"
Because of this expansion (into a case or if), I took the liberty of quasiquoting the "Undeclare[d] macro for this object" error in my rewrite.
Yours:
arc> (type x)
sms
arc> (dec x (sms nonexistant-class) (w/msg x))
Error: "Undeclare macro for this object"
arc> (dec x (foo bar baz) (w/msg x))
Error: "Undeclare macro for this object"
arc> (dec x foo (w/msg x))
Error: "Undeclare macro for this object"
arc> (dec x sms (w/msg x))
dispatched on sms
"dispatched on sms"
Mine:
arc> (type x)
sms
arc> (dec x (sms nonexistant-class) (w/msg x))
dispatched on sms
"dispatched on sms"
arc> (dec x (foo bar baz) (w/msg x))
Error: "No method w/msg for type sms"
arc> (dec x foo (w/msg x))
Error: "No method w/msg for class foo"
arc> (dec x sms (w/msg x))
dispatched on sms
"dispatched on sms"
I'm still confused that the macros take in some sort of implicit parameter which is (I think) invariably just the class being dispatched upon. Is it good for anything?
Yours:
arc> (cmac m a (x y z)
`(do1 nil
(prs "dispatching on" ',a "with args" ',x ',y ',z #\newline)))
#(tagged mac #<procedure: m>)
arc> (cmac m b (x y z)
`(do1 nil
(prs "dispatching on" ',b "with args" ',x ',y ',z #\newline)))
#(tagged mac #<procedure: gs1774>)
arc> (dec foo a (m foo bar baz quux))
dispatching on a with args bar baz quux
nil
arc> (= foo (annotate 'b 'foo))
#(tagged b foo)
arc> (dec foo (a b) (m foo bar baz quux))
dispatching on b with args bar baz quux
nil
Mine:
arc> (macmethod m a (x y z)
`(do1 nil
(prs "dispatching on" ',a "with args" ',x ',y ',z #\newline)))
#(tagged mac #<procedure: m>)
arc> (macmethod m b (x y z)
`(do1 nil
(prs "dispatching on" ',b "with args" ',x ',y ',z #\newline)))
nil
arc> (dec foo a (m foo bar baz quux))
dispatching on a with args bar baz quux
nil
arc> (= foo (annotate 'b 'foo))
#(tagged b foo)
arc> (dec foo (a b) (m foo bar baz quux))
dispatching on b with args bar baz quux
nil
The fact that the "methods" are macros also means that the multi-declaration case will try to expand every declared possibility, regardless of what type the object is.
arc> (cmac incongruent-args a () `(list ',a))
#(tagged mac #<procedure: incongruent-args>)
arc> (cmac incongruent-args b (x) `(list ',b ',x))
#(tagged mac #<procedure: gs1774>)
arc> (cmac incongruent-args c (x y) `(list ',c ',x ',y))
#(tagged mac #<procedure: gs1776>)
arc> (dec foo a (incongruent-args foo))
(a)
arc> (dec foo b (incongruent-args foo bar))
(b bar)
arc> (dec foo c (incongruent-args foo bar baz))
(c bar baz)
arc> (= foo (annotate 'c 'foo))
#(tagged c foo)
arc> (dec foo (c) (incongruent-args foo bar baz))
(c bar baz)
arc> (dec foo (a b c) (incongruent-args foo bar baz))
Error: "procedure gs1774: expects 2 arguments, given 3: b bar baz"
arc> (dec foo (c b a) (incongruent-args foo bar baz))
Error: "procedure gs1772: expects 1 argument, given 3: a bar baz"
because, in yours, it expands into:
arc> (= (declare-obj-class* 'foo) '(a b c))
(a b c)
arc> (ppr:_callmac 'incongruent-args 'foo 'bar 'baz)
(if (isa foo 'c)
(gs1776 c bar baz)
(isa foo 'b)
(gs1774 b bar baz)
(isa foo 'a)
(gs1772 a bar baz)
(err "Undeclare object"))t
and each gs... macro gets expanded anyway -- sometimes with the wrong number of arguments.
Same thing in mine:
arc> (= (declarations* 'foo) '(a b c))
(a b c)
arc> (ppr:invoke 'incongruent-args 'foo '(bar baz))
(case (type foo)
a
(gs1791 a bar baz)
b
(gs1793 b bar baz)
c
(gs1795 c bar baz)
(err (+ "No method "
'incongruent-args
" for type "
(type foo))))t
In general, I'm not sure I've ever needed "object-oriented" macros (especially in Arc), but I've never really looked for a use-case.
Much of Lisp's power stems from the fact that virtually everything can be represented as a list. If this is true, then writing 'each for lists is almost as good as (or better than) writing a generic 'each for every kind of type. A language design like Arc's that capitalizes on this idea indirectly provides incentive for making as many things into lists as possible. This is why pg has flirted with representing both strings [1] and numbers [2] as lists, and also why he promotes using assoc lists over tables when possible [3].
One disadvantage of this approach is that it can sometimes seem unnatural to represent x as a list, but it has the benefit of providing a very minimal cloud of abstractions with maximum flexibility.
A powerful object system seems like a different way of going about the same thing. That's probably why Lisp users are often unenthusiastic about objects: there's a feeling of redundancy and of pulling their language in two different directions. (Arc users can be especially unenthusiastic because they're so anal about minimizing the abstraction cloud.) That's why I'm not particularly enthusiastic about objects, anyway. They're not worse - just different, and largely unnecessary if you have lists.
I could be missing the boat here. I don't have enough experience with object systems to understand all their potential benefits over using lists for everything. (And, of course, the popularity of CLOS demonstrates that a lot of people like to have both!)
[1] arc.arc has this comment toward the top:
; compromises in this implementation:
...
; separate string type
; (= (cdr (cdr str)) "foo") couldn't work because no way to get str tail
; not sure this is a mistake; strings may be subtly different from
; lists of chars
[2] The Lisp that McCarthy described in 1960, for example, didn't have numbers. Logically, you don't need to have a separate notion of numbers, because you can represent them as lists: the integer n could be represented as a list of n elements. You can do math this way. It's just unbearably inefficient.
[3] I once thought alists were just a hack, but there are many things you can
do with them that you can't do with hash tables, including sort
them, build them up incrementally in recursive functions, have
several that share the same tail, and preserve old values.
I think you're right about it being frustrating to be pulled in multiple directions when choosing how to represent a data structure.
In Groovy, I'm pulled in one direction:
class Coord { int x, y }
...
new Coord( x: 10, y: 20 )
+ okay instantiation syntax
+ brief and readable access syntax: foo.x
As the project evolves, I can change the class definition to allow for a better toString() appearance, custom equals() behavior, more convenient instantiation, immutability, etc.
In Arc, I'm pulled in about six directions, which are difficult to refactor into each other:
'(coord 10 20)
+ brief instantiation syntax
+ brief write appearance: (coord 10 20)
+ allows (let (x y) cdr.foo ...)
- no way for different types' x fields to be accessed using the same
code without doing something like standardizing the field order
(obj type 'coord x 10 y 20)
+ brief and readable access syntax: do.foo!x (map !x foos)
+ easy to supply defaults via 'copy or 'deftem/'inst
[case _ type 'coord x 10 y 20]
+ immutability when you want it
+ brief and readable access syntax: do.foo!x (map !x foos)
- mutability much more verbose to specify and to perform
(annotate 'coord '(10 20))
+ easy to use alongside other Arc types in (case type.foo ...)
+ semantically clear write appearance: #(tagged coord (10 20))
+ allows (let (x y) rep.foo ...)
- no way for different types' x fields to be accessed using the same
code without doing something like standardizing the field order
(annotate 'coord (obj x 10 y 20))
+ easy to use alongside other Arc types in (case type.foo ...)
+ okay access syntax: rep.foo!x (map !x:rep foos)
(annotate 'coord [case _ x 10 y 20])
+ immutability when you want it
+ easy to use alongside other Arc types in (case type.foo ...)
+ okay access syntax: rep.foo!x (map !x:rep foos)
- mutability much more verbose to specify and to perform
(This doesn't take into account the '(10 20) and (obj x 10 y 20) forms, which for many of my purposes have the clear disadvantage of carrying no type information. For what it's worth, Groovy allows forms like those, too--[ 10, 20 ] and [ x: 10, y: 20 ]--so there's no contrast here.)
As the project goes on, I can write more Arc functions to achieve a certain base level of convenience for instantiation and field access, but they won't have names quite as convenient as "x". I can also define completely new writers, equality predicates, and conditional syntaxes, but I can't trust that the new utilities will be convenient to use with other programmers' datatypes.
In practice, I don't need immutability, and for some unknown reason I can't stand to use 'annotate and 'rep, so there are only two directions I really take among these. Having two to choose from is a little frustrating, but that's not quite as frustrating as the fact that both options lack utilities.
Hmm, that gives me an idea. Maybe what I miss most of all is the ability to tag a new datatype so that an existing utility can understand it. Maybe all I want after all is a simple inheritance system like the one at http://arclanguage.org/item?id=11981 and enough utilities like 'each and 'iso that are aware of it....
I rewrote the type system for arc a while ago, so that it would support inheritance and generally not get in the way, but unfortunately I haven't had the time to push it yet. If you're interested, I could try to get that up some time soon.
Well, I took a break from wondering what I wanted, and I did something about it instead, by cobbling together several snippets I'd already posted. So I'm going to push soon myself, and realistically I think I'll be more pleased with what I have than what you have. For instance, mine is already well-integrated with my multival system, and it doesn't change any Arc internals, which would complicate Lathe's compatibility claims.
On the other hand, at the moment it's really clear to me that implementing generic doppelgangers of arc.arc functions is a bummer when it comes to naming, and that modifying the Arc internals to be more generic, like you've done (right?), could really make a difference. Maybe in places like that, your approach and my approach could form an especially potent combination.
I finally pushed this to Lathe. It's in the new arc/orc/ folder as two files, orc.arc and oiter.arc. The core is orc.arc, and oiter.arc is just a set of standard iteration utilities like 'oeach and 'opos which can be extended to support new datatypes.
The main feature of orc.arc is the 'ontype definition form, which makes it easy to define rules that dispatch on the type of the first argument. These rules are just like any other rules (as demonstrated in Lathe's arc/examples/multirule-demo.arc), but orc.arc also installs a preference rule that automatically prioritizes 'ontype rules based on an inheritance table.
It was easy to define 'ontype, so I think it should be easy enough to define variants of 'ontype that handle multiple dispatch or dispatching on things other than type (like value [0! = 1], dimension [max { 2 } = 2], or number of arguments [atan( 3, 4 ) = atan( 3/4 )]). If they all boil down to the same kinds of rules, it should also be possible to use multiple styles of dispatch for the same method, resolving any ambiguities with explicit preference rules. So even though 'ontype itself may be limited to single dispatch and dispatching on type, it's part of a system that isn't.
Still, I'm not particularly sure orc.arc is that helpful, 'cause I don't even know what I'd use it for. I think I'll only discover its shortcomings and its best applications once I try using it to help port some of my Groovy code to Arc.
And yes, I did modify arc's internals to be more generic. Basically, I replaced the vectors pg used for typing with lists and added the ability to have multiple types in the list at once. Since 'type returns the whole list, and 'coerce looks for conversion based on each element in order, we get a simple form of inheritance and polymorphism, and objects can be typed without losing the option of being treated like their parents.
arc> (macex-all '(fn (do re mi) (+ do re mi)))
(fn ((fn () re mi)) (+ do re mi))
arc> (macex-all ''(do not expand this -- it is quoted))
(quote ((fn () not expand this -- it is quoted)))
I still don't follow. A) Why do we need "ignores" here? B) Why would we need to use the declaration outside of the definition? Expressions can be sequenced. I.e., wouldn't you do something like this?
(def f (x y)
(let i 3
(declare integer i) ; this is evaluated and sets metadata...
(something-with x y i))) ; ...but THIS is the value that gets returned
Similarly,
(def foo (bar)
(let baz 10
(prn "hello") ; evaluates and returns the string "hello"
(+ bar baz))) ; evaluates and returns bar + 10
arc> (foo 5) ; prints hello and returns 15
hello
15
I think fallintothis declared i as an integer because ey was trying to translate your code, which declares int_value as an integer.
So the 'temporary call serves to tell the 'do in 'declare not to return the value from 'undeclare, but instead to return the value from the (= temporary* (do ,@body)) line?
This seems a lot more complicated than simply saving the value in a 'let or using do1.
Even fixing that, the metadata-setting happens at macroexpansion time, so you get
arc> (def f (x) (declare x integer (prn "metadata*: " metadata*) (+ x 5)))
#<procedure: f>
arc> metadata*
#hash()
arc> (f 5)
metadata*: #hash()
10
arc> metadata*
#hash()
At no point before, after, or in the body is the metadata actually in the hash table. It was just there for a brief pause between the macroexpansions of declare and undeclare.
But the last line shows we just wipe any declaration we made, so a global metadata table gets messy, unless we make the declarations themselves global (i.e., get rid of body).
It was just there for a brief pause between the macroexpansions of declare and undeclare.
If we want to change the behavior of other macros for a certain region of code, then that pattern might be useful. Since we seem to be talking about static type declarations, which I presume would be taken into account at macro-expansion time, I think the "between the macroexpansions" behavior is the whole point.
Thank you for the insight. It's probably the most lucid I've been all thread. It didn't seem deliberate to me, but it could have feasibly been written that way to control other macros' expansions. This also pushes computation to expansion time, which might clarify ylando's objections about "wasting run time". Except those still confuse me: macro expansion happens once, inside a function's body or outside of it.
arc> (mac m (expr)
(prn "macro m has expanded")
expr)
#(tagged mac #<procedure: m>)
arc> (def f (x)
(m (+ x 1)))
macro m has expanded
#<procedure: f>
arc> (f 1)
2
But the original point seems lost because declare's story keeps changing. So, ylando: why do we need "ignores"?
Try building a macro that changes a global value,
expands code (with macros), and then changes the value back.
I think that this macro must use another macro to
change the value back, like the undeclare macro above.
The second macro expands into unnecessary code, so
if you put it inside a function this unnecessary code
will waste run time.
If we have an "ignore" macro, we can write macros that do not
produce unnecessary code.
This introduces a redundant nil in the after block, and using after is a bit slower than just a do1. But we can't use do1 because this "do all the work at macro-expansion" approach is so touchy that it breaks:
arc> (load "macdebug.arc") ; see http://arclanguage.org/item?id=11806
nil
arc> (macwalk '(declare name prop a b c))
Expression --> (declare name prop a b c)
macwalk> :s
Macro Expansion ==>
(do1 (do a b c)
(undeclare name nil))
macwalk> :s
Macro Expansion ==>
(let gs2418 (do a b c)
(undeclare name nil)
gs2418)
macwalk> :s
Macro Expansion ==>
(with (gs2418 (do a b c))
(undeclare name nil)
gs2418)
macwalk> :s
Macro Expansion ==>
((fn (gs2418)
(undeclare name nil)
gs2418)
(do a b c))
macwalk> :s
Subexpression -->
(fn (gs2418)
(undeclare name nil)
gs2418)
macwalk> :s
Subexpression --> (undeclare name nil)
macwalk> :s
Value ==> nil
Value ==> gs2418
Value ==> (fn (gs2418) nil gs2418)
Subexpression --> (do a b c)
macwalk> :a
Value ==> (do a b c)
Value ==>
((fn (gs2418) nil gs2418) (do a b c))
((fn (gs2418) nil gs2418) (do a b c))
Note that we reach undeclare before the actual body is expanded!
We can hack it without after or do1 (or mutation, but I avoid that anyway).
This way, declare expands in the right order and we only undeclare once, since it'll expand into nil. The nil is "unnecessary", which seems to be why you want ignore, but it's a terribly pedantic point: ignore is already accomplished by dead code elimination (http://en.wikipedia.org/wiki/Dead_code_elimination). This isn't even a case of "sufficiently smart compilers" for vanilla Arc: mzscheme already implements the standard optimizations -- function inlining, dead code elimination, constant propagation/folding, etc. (see http://download.plt-scheme.org/doc/html/guide/performance.ht...) -- which should all be able to clean up whatever ac.scm generates. E.g.,
(mac foo ()
`(prn ',metadata*!name))
(declare name bar (foo))
Final idea: if expansion-time computation can't be avoided, you can expand the macros manually, if only for the sake of your readers. As a bonus, it does away with the dead code.
The first problem is that it is a one-liner, and sometimes they hide nasty bugs
As a corollary to "sometimes code hides nasty bugs". ;)
One-liners aren't intrinsically bug-prone. I'd even argue that they're often less buggy, just because there's less code to get wrong. Akkartik's problem is actually an example: the issue was data structure choice, and the fixed code was still one line.
The second is that it is in reverse order
Depends on who you ask. Nested function calls read fine to me, but people have built entire languages to avoid them (e.g., http://factorcode.org/).
"I still think that the bug in akkartik code is a result of too complicated one liner."
I'll make 2 objections to that:
a) That particular case was not a bug, but a performance issue.
b) The response to bugs isn't a more verbose formulation. Verbosity has its own costs to pay: patterns that you could see on a single screen no longer fit side by side, which can cause bugs of its own.
If one-liners are to be avoided, you could just replace the call to reduce in your example with an explicit loop. But that's a bad idea, right?
Perhaps you're finding right-to-left hard to read. Stick with it; you'll find that it becomes easier to read with practice. Many of us started out with similar limitations. It's like learning to ride a bicycle; I can't explain why it was hard before and isn't anymore, but vast numbers of people had the same experience and you will very probably have it too. As you read more code you'll be able to read dense one-liners more easily. There is indeed a bound on how dense a line should be, but this example is nowhere near it.
I'm not convinced having namespaces doesn't encourage bloat.
Even if they did, what's the alternative? Implementation details leak all over the place in a single namespace. And if your project is large enough, you're going to wind up with many little functions & macros (lest you have one giant main function). Even arc.arc winds up exposing things like parse-format, insert-sorted, and reinsert-sorted.
I don't care if it's implemented library-level: I just need a way to keep innards internal. Thus far, I've been using ad hoc methods like
(mac provide (public . body)
(let (locals new-body) nil
(each expr body
(case (acons&car expr)
def (let (name . rest) (cdr expr)
(unless (mem name public)
(push name locals)
(= expr `(assign ,name (fn ,@rest)))))
= (each (var val) (pair (cdr expr))
(if (~mem var public)
(push var locals))))
(push expr new-body))
`(let ,locals nil
,@(rev new-body))))
But this breaks on macros -- both their local binding (cf. http://arclanguage.org/item?id=11685) and (since it's ad hoc) those that might expand into assignment forms, like defmemo or defs or def inside of a let.
At the risk of sounding like a broken record, I think Lathe's namespaces already embody lots of the ideas people are talking about here. :-p
In this case, Lathe provides two forms in its more-module-stuff.arc module, 'packing and 'pack-hiding, which work like 'packed but only put certain parts of the "my" namespace into the package object. That way, the internals don't get imported.
The 'packing and 'pack-hiding forms are in a separate module only because they aren't fundamental to Lathe, but in fact, I've never actually wanted to use them. Just having separate namespaces is enough for me, 'cause when I want to have unobtrusive definitions, I can just create a throwaway namespace to put them in.
The main point of this in Lathe is so that the form can clean up after itself using an 'after form. The return value capability is also nice.
An alternate namespace implementation might take this format so that it could search-and-replace names in its body at macro-expansion time. That was my original plan for Lathe's namespaces, but I soon realized a simplistic code walker wouldn't do, and I didn't really want to write a sophisticated code walker unless I had a whole new language in mind. Also, I doubt this approach would translate very well to the REPL.
"I'm not convinced having namespaces doesn't encourage bloat."
"Even if they did, what's the alternative? If your project is large enough.."
I think the point is that if you don't have namespaces you'll be more careful to keep projects concise. It's not always a good response, but the skunkworks ethos pretty much permeates all of arc.
Paraphrasing: if you don't have chocolate, you'll be more careful to keep cakes not-chocolatey.
Having one namespace isn't about concision any more than large == bloated (largeness is necessary but insufficient for bloat). Some code (short or long) lends itself to one namespace, some doesn't. And bashing the latter into one namespace doesn't make it concise.
Take arc.arc. It has a lot of code, but fits in one namespace because it rarely defines a function for another's sake. Functions/macros are usually either mutually exclusive library utilities, or were supposed to be exposed anyways (e.g., loop is used to define for, but it's okay, because we wanted loop anyways). Even so, there are cases like =, whose logic is spread across expand=, expand=list, setforms, metafn, and expand-metafn-call.
This versus http://arclanguage.org/item?id=11179, which provides (essentially) just sscontract, but is still large enough to naturally spread across functions that shouldn't be exposed (much like =). What would a "concise" sscontract be? One giant if statement with copy/pasted afns? At least with that method, all of the "bloat" like
I'm not sure what you mean. Are you suggesting that people write a namespace system each time they need one? Or that there should be some composable facilities that let people pick & choose the features they need? The latter kind of sounds like "y'know, namespaces, but done right", so I can hardly disagree with it. :P
Yeah, I frequently see people hacking together their own namespace systems, and either trying to go for the most complete and cumbersome system possible (handling dependencies, versions, etc.) or something that isn't general enough to be used more than once.
Maybe we should try and design a set of very basic namespace handling tools, and then allow users to extend off of them. Basic as in "See namespace. See namespace hold names. See namespace export names for use" If we make them simple enough, and generic enough, it should be possible to add whatever other features are necessary later.
Right now the only hard part about implementing namespaces seems to me to be support for macros. Anyone have any ideas on how to allow macro indirection via namespaces without having first class macros? Or maybe just a good way to handle first class macros themselves?
Anyone have any ideas on how to allow macro indirection via namespaces without having first class macros?
Lathe's approach (where namespaces are friendly-name-to-unfriendly-global-name tables encapsulated by macros):
arc> (use-rels-as ut (+ lathe-dir* "utils.arc"))
#(tagged mac #<procedure: nspace-indirect>)
arc> (macex1 '(ut (xloop a list.7 b 0 a.b)))
(gs2012-xloop a list.7 b 0 a.b)
arc> (macex '(ut (xloop a list.7 b 0 a.b)))
((rfn next (a b) a.b) list.7 0)
arc> (ut:xloop a list.7 b 0 a.b)
7
Maybe we should try and design a set of very basic namespace handling tools, and then allow users to extend off of them.
Funny, that's part of what I had in mind as I made Lathe's module system. :-p Is there some aspect of Lathe's namespace handling that's inconsistent with what you have in mind? The point of the Lathe module system is mainly to keep the rest of the Lathe utilities safe from name conflicts, so I'll gladly swap it out if we can come up with a better approach.
Well, I think people could write their own to suit their tastes. It seems to me you'd only really do this once. The exception would be a big fat project which wanted to use its own namespace mechanism; if you wanted to do something within such a project you'd probably bend to the will of how that project does things.
I will explain:
1) I think we should separate the definition and the
assignment, something like:
use strict;
my $x;   # declare the var for the first time
$x = 3;  # assign a value to the var
in perl.
3) Suppose we want to make an object-oriented function; we can write a function
(def myfunc (this arg1 arg2) ...)
If we had aliases, we could write a macro that expands to
(w/alias (var this.var var2 this.var2)
(def myfunc (this arg1 arg2) ...))
1) I think you're missing the whole point of 'let.
arc> (= x 5)
5
arc>
(let x 3 ; "declare" a var with value 3
(= x (* x x))
(+ x 2))
11
arc> x
5
If you want to separate the declaration from the initial value... why? What happens if you use the variable in between those two times?
2) For what it's worth, my Lathe library provides a certain sort of namespaces, and I've been using those pretty happily. (http://arclanguage.org/item?id=11610) But of course I'd be happy with them, 'cause I'm their author. :-p
That said, I think 'symbol-macro-let would be nifty. I wonder if it could be even more useful to have some way to build a lambda whose free variables were treated in a custom way (as opposed to just being globals). Either one of these could be a basis for scoped importing of namespaces.