Arc's square bracket syntax is implemented at the reader level. Try it out yourself:
arc> '[a b c]
Regular Arc and Anarki should have different results for this, but neither is the list (a b c), which is how everyday Racket reads it. That is to say, Racket treats square brackets and parens identically in this respect. If you just use parens instead of square brackets, it'll probably work.
You can use parens in Racket too; it's just that the prevailing style is to use square brackets in certain places. Presumably it's for readability, but I think the best arguments for its readability are that it stands out and that it's what everybody already expects, and those reasons don't thrill me much. :)
Just got back to coding in Arc again after a big break. And using Clojure hasn't helped in this regard (square brackets are vectors, easily swapped in and out for list structures). I need a little tuning - lol.
I wouldn't mind hearing more about your experiences with Clojure sometime. Any features from Clojure you especially miss when using Arc (or vice-versa)?
I'm glad you asked. I recently completed 2 applications in Clojure. My Clojure code is heavily influenced by Arc, to the point that I ended up with a bunch of libraries creating what I consider to be a hybrid (even though really it's Clojure).
I'll give some high level thoughts:
* I miss the intra-symbol syntax.
* I miss Arc's anonymous fn short form [_], though I like Clojure's version too.
* I miss using : to compose functions (reducing brackets).
* I miss mutability for its ease, but I'm glad to have transactions.
* I prefer Clojure's 'def' for binding objects over '=' (makes for easier reading when switching between languages).
* I love having so many libraries in Clojure.
* I love having the option to call Java libraries from within Clojure without needing to be a Java expert.
* I love having namespaces.
* Clojure metadata is useful, and I never used Arc's tagging.
* I'm really happy that I can easily create a source-protected distributable application file that can be delivered as a .exe or .app.
It's worth noting that Clojure is a double-edged sword. It's feature-rich, which is awesome, but the cost is a bunch of complexity that's not really necessary, making the learning curve and knowledge requirement higher (i.e. you don't need to be smarter, just able to hold more in your head).
For example, I'm glad Clojure has so many data structure options (cons, lists, vectors, and so on), which is sweet because it makes code really readable, but then the functions behave differently on differing data structures. Functions like 'conj' (roughly Arc's join) will push x onto the front of a list, but push x onto the end of a vector. You have to keep that crap in your head while coding. I often think to myself: why didn't they just keep it simple?
These kinds of problems are exacerbated by the fact that all iterator and map/reduce-type functions always return cons. At first glance that might seem OK (i.e. you pass in a vector and end up with a list), but in your functions you never actually see the list; it's just a result or by-product that you pass into some other function. So for me, I read vector, but need to think list, and know that things like conj are doing particular things under the hood.
The rest is pretty much the same (to me). The only other thing I will say is that many difficult problems (not language problems, but library or project problems) are solved in Clojure - which was really important to me. If you can remember, I'm the guy who only knew one language.
> These kinds of problems are exacerbated by the fact that all iterator and map/reduce type functions always return cons. [...]
What if the functions did behave the same on different data structures? Would that make having map and iterator functions always return cons less confusing?
The reason I ask is sometimes I think about things like having car and cdr work on strings... and functions that take string arguments also accepting lists of characters. This would simplify some code like map, because you could pass it any object that could be treated like a list, and it could cdr its way through without needing special code to check whether it had been passed a string. However it would then always return a list; that is,
> "Would that make having map and iterator functions always return cons less confusing?"
Yes it would/does. When I created my arc libraries, I built the normalization in. So I have a join function, which detects the type and applies the appropriate operation. I don't use conj, because I don't want to think about those things.
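That kind of normalization is easy to sketch. Here's a minimal illustration in Python of a type-dispatching join (hypothetical code, not the author's actual Clojure library): one function detects the collection's type and applies the appropriate "add to the left side" operation, so call sites never have to think about it.

```python
def join(x, coll):
    """Normalized 'add to the left side': dispatch on the collection's
    type so callers never care which structure they hold (a sketch)."""
    if isinstance(coll, list):
        return [x] + coll
    if isinstance(coll, tuple):
        return (x,) + coll
    if isinstance(coll, str):
        return str(x) + coll
    raise TypeError(f"unsupported collection type: {type(coll).__name__}")
```

A right-side `join>` variant would be the mirror image, appending instead of prepending.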
Interesting that you picked a string example, since that's exactly how Clojure already works. That is, using map on a string returns a list of chars. I often need to write:
(apply str (map #(upcase %) "abc"))
For whatever reason I don't mind this; I expect strings to get converted to lists.
Here, seeing only the vectors, I would get caught thinking it would produce (1 2 3 4 5 6); instead you would get (6 5 4 1 2 3).
Of course I went and created two join functions with my arc functions.
join : always adds to the left side of a list or vector
join> : always adds to the right side of a list or vector
(let [myVector (atom [1 2 3])]
  (each x [4 5 6]
    (reset! myVector (join> x @myVector)))
  @myVector)
And I no longer have problems: I can see what's happening at the top level, and it's consistent.
So in the end it's not such a big deal when you're willing to normalize all the functions.
[note: I never tried out the code, it may not work, but you get the idea]
I have a generic 'map design on my mind, and it may or may not be related to your 'deftransform approach. The idea is to have another method that takes two arguments: A value to convert into a lazy list and back (or seq, or stream, or whatever you want to call it ^^ ), and a function to transform the value while it's a lazy list. Then implementing 'map is just a matter of implementing it for lazy lists.
Technically, the intermediate value doesn't need to be of any particular type, just as long as it supports lazy list style iteration--another generic method. That generic method's what I'm calling "ifdecap". So with cons lists, for instance, you can have them implement 'ifdecap, and then the conversion-and-back method doesn't need to make any conversion to a lazy list (but it still needs to convert back).
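As a rough illustration of this design, here is a sketch in Python (all names hypothetical): each type supplies a decap operation (the 'ifdecap' analog, returning a head and tail or nothing) plus a way to rebuild itself from the intermediate sequence, and a single generic map is written against only those two hooks.

```python
def decap(xs):
    """Hypothetical 'ifdecap' hook: None when empty, else (head, tail)."""
    if isinstance(xs, str):
        return (xs[0], xs[1:]) if xs else None
    xs = list(xs)
    return (xs[0], xs[1:]) if xs else None

def rebuild(template, items):
    """Convert the intermediate sequence back to the original's type."""
    return "".join(items) if isinstance(template, str) else list(items)

def generic_map(func, xs):
    """Map written only against decap/rebuild; it never needs
    type-specific iteration code of its own."""
    orig = xs
    out = []
    while (split := decap(xs)) is not None:
        head, xs = split
        out.append(func(head))
    return rebuild(orig, out)
```

A new sequence type only needs to plug into `decap` and `rebuild` to get `generic_map` (and everything built on it) for free.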
A lot of this design is based on conversations we've had here before.
This isn't one of the conversations I was thinking of, but it's relevant to many of the things we've been talking about recently: http://arclanguage.org/item?id=8249
Here's another place an extensible 'map was brought up, which wasn't one of the conversations I was thinking about either. XD http://arclanguage.org/item?id=4141
Yeah wart now follows that recommendation for what to return. But it's still not clear how map should operate on new sequence types. Your second link suggests the idea from ruby that map rely on each. Hmm, I haven't built each yet in wart, and I've felt the fleeting need several times..
FWIW, I implement 'each based on 'some. Iterating with 'some has the advantage that you can break the loop early in a natural way, rather than with some tacked-on extra feature. Although 'all allows this too, and it might appear to be an equivalent starting point, you can only break an 'all loop using nil, and so the only possible results of (all ...) are nil and t, unlike 'some, which can return any value. That makes it easier to implement 'all in terms of 'some than vice versa; the (no:some no:func xs) approach loses no information.
Furthermore, 'some lets you implement 'each and 'all without resorting to mutation. This makes the interaction with continuations a little less frustrating (IMO), as long as the types you're using have continuation-friendly implementations of 'some.
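A minimal Python sketch of that layering (hypothetical names; 'some' returns the first truthy result or None):

```python
def some(func, xs):
    """Return the first truthy result of func over xs, else None.
    The loop breaks early just by func returning a truthy value."""
    for x in xs:
        result = func(x)
        if result:
            return result
    return None

def all_(func, xs):
    """The (no:some no:func xs) trick: true iff no element fails."""
    return some(lambda x: not func(x), xs) is None

def each(func, xs):
    """Iterate purely for side effects: wrap func so it never looks
    truthy to 'some', and the loop runs to completion."""
    some(lambda x: (func(x), None)[1], xs)
```

Note the asymmetry the post describes: `all_` falls out of `some` with no information lost, while defining `some` in terms of `all_` would collapse every truthy result down to a bare true/false.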
---
> But it's still not clear how map should operate on new sequence types.
It's clear to me, at least. :-p Under my design, it's up to the type's designer to decide what it means to convert their type to a lazy list and back. I think I'd suggest erring on the side of converting back to a picky container (e.g. a string, which can only hold characters) even at the expense of errors, 'cause someone who wants to get around the errors can just convert their picky container into a lazy list before passing it to 'map, and convert it back to whatever lenient container they need after that.
Hmm, I wonder if each is a good way to encode the lazy-list conversion. Perhaps it makes sense to make each generic, and map just use each.
---
As a reminder of how much I still have to learn, I just spent 2 days off and on trying to get lexical functions to take precedence over macros in wart before realizing that it makes no sense; macros get expanded long before the lexical scope is even created.
I vaguely remember a comment that this is possible by changing 2 lines in the arc interpreter. Now I know that they make the resulting language impossible to compile :/
Since ac crawls over the s-expression transforming everything into Scheme, whenever it sees a fn (which is what creates lexical bindings), it keeps track of what variables are being introduced in that scope with the env argument. E.g., see the last line of the function
(define (ac-complex-fn args body env)
  (let* ((ra (ar-gensym))
         (z (ac-complex-args args env ra #t)))
    `(lambda ,ra
       (let* ,z
         ,@(ac-body* body (append (ac-complex-getargs z) env))))))
So it's entirely possible to compile things as function calls (instead of macro calls) when a variable's lexically bound. But this relies on how macros can't be lexically bound in Arc (no macrolet equivalent). On the other hand, first-class macros might be defined at runtime, and the compiler can't figure out whether something that looks like a function call is actually a macro call, hence forcing interpretation: http://arclanguage.org/item?id=11517.
It is theoretically possible to do a form of JIT compilation, where you compile the normal function (presuming function calls), but keep the source around and dynamically compile and link in branches of code for when the argument is a macro.
Basically, you'd have a jump table based on the argument. If it was a function, you'd jump to the function version. If it was a macro, you'd look it up in a hash table to see if it had been used before. If not, compile the function again and add the new one to the hash table.
Obviously, this wouldn't be as fast as a normal function call, since you'd have to do runtime checking of one of the arguments, but if you were careful the common case could be quite fast, with only the macro calls being slow and only new macro calls taking more than a small amount of time.
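The scheme above can be modeled in miniature. This toy Python sketch (everything here is hypothetical, not a real compiler) keeps the unevaluated argument expressions at each call site, takes a fast path for function calls, and lazily caches one "compiled" expansion per macro:

```python
class Macro:
    """A first-class macro: its expander receives unevaluated source."""
    def __init__(self, expander):
        self.expander = expander

def eval_expr(expr, env):
    # Toy evaluator: strings are variable names, anything else a literal.
    return env.get(expr, expr) if isinstance(expr, str) else expr

class CallSite:
    """One call site: fast path for functions, plus a per-macro cache
    of compiled expansions (the 'jump table' described above)."""
    def __init__(self, arg_exprs):
        self.arg_exprs = arg_exprs   # source kept around for macros
        self.macro_cache = {}        # macro -> compiled thunk

    def invoke(self, op, env):
        if isinstance(op, Macro):
            thunk = self.macro_cache.get(op)
            if thunk is None:
                # First use of this macro here: expand and "compile" once.
                expansion = op.expander(*self.arg_exprs)
                thunk = lambda env: eval_expr(expansion, env)
                self.macro_cache[op] = thunk
            return thunk(env)
        # Common case: plain function call, arguments evaluated eagerly.
        return op(*[eval_expr(e, env) for e in self.arg_exprs])
```

The runtime check on `op` is the unavoidable cost; but after the first expansion, repeat macro calls hit the cache rather than recompiling.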
Of course, this is only one possible implementation. It is true that any language that supports macros could not be entirely statically compiled, but then again as far as I know any higher-order functional language requires runtime compilation as well. It partly depends on how you define "compilation."
> It is true that any language that supports macros could not be entirely statically compiled
Actually, in a language with no side effects, I don't think there's anything keeping run time level code from being used at compile time, as long as there's already enough information to compile it. Who would ever know? ;) This means you have the most unique benefits of macros, that of being able to program them in the same language and environment as the rest of the program.
This is the basis behind Blade, an extensible statically compiled language I was making before I realized the lack of side effects made it really hard for me to write code for. :-p
> before I realized the lack of side effects made it really hard for me to write code for. :-p
So that's why you stopped working on Blade. I was curious! ^_^ Can you speak more about your realization that side effect-free programming is impractical? I've sometimes found myself tempted by pure FP, you see.
[PRO TIP: Click "link" (or "parent") on this comment, so you can view it (mostly) by itself rather than squashed against the right side of the page.]
For me, when I was writing my spider solitaire program, I had a couple of global variables, the-deck and the-piles, which represented the state of the game. the-deck was a list of randomly permuted cards, which were integers from 0 to 51 (or 0-25 with two suits, or 0-12 with one suit). the-piles was a list of piles, which were lists of cards (the zeroth element was the top of the pile), and the cards were of the form (list card revealed?), where "card" = 0-51 and "revealed" = t/nil. (Yes, a card is an integer in the deck, but a list of an integer and t/nil on the board. This does not need to be changed.)
To flip over the top card of the nth pile, I went:
(= (cadar the-piles.n) t)
To deal a card (face up) to top of each pile from the deck, I said:
(forlen i the-piles
  (push (list pop.the-deck t) the-piles.i))
To move a chain of n cards from (the top of) one pile to another, I went:
(let (a b) (split the-piles.i n)
  (= the-piles.i b
     the-piles.j (join a the-piles.j)))
Also, I created a global variable called "prev-states"; every move the user performed would push a backup of the game state onto it, and undoing a move went
(let (a b) pop.prev-states
  (= the-deck a the-piles b))
. Now, what would you do in a "pure functional programming" language? If there were no assignment whatsoever, not even to global variables, I'd probably effectively implement global state by making every function take an extra argument representing global state, and shifting code around in other ways. If you could assign to global variables but all data structures were immutable (I think Clojure is like that, correct me if I'm wrong), then I'd have to keep rebinding the entire "the-piles" list when all I wanted to do was alter one or two piles. Revealing the top card of the nth pile would look something like this:
(= the-piles
   (join (take (- n 1) the-piles)
         (list:cons (list (car the-piles.n) t) (cdr the-piles.n))
         (drop n the-piles)))
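For comparison, a hypothetical Python sketch of the same pure-functional update (0-based indexing here, cards as (rank, revealed) pairs) shows the same shape: the whole piles list gets rebuilt just to touch one pile, though the original stays intact.

```python
def reveal_top(piles, n):
    """Return a new piles list with the top card of pile n revealed.
    Nothing is mutated, so every enclosing structure is rebuilt."""
    pile = piles[n]
    new_pile = [(pile[0][0], True)] + pile[1:]
    return piles[:n] + [new_pile] + piles[n + 1:]
```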
(Note that "push" and "pop" are legal because they just do global assignment; they don't alter any data structures). And then moving a chain of n cards from i to j would look like this:
(= the-piles
   ; ... oh god I have to know whether i>j
   ;(join (take (dec:min i j) the-piles)
   ; ... egad I may as well just say (if (< i j) ...
   ;(if (< i j)
   ;  (join (take dec.i the-piles)
   ;        (list:drop n the-piles.i)
   ;        (cut the-piles i dec.j)
   ;        (list:join (take n the-piles.i) the-piles.j)
   ;        (drop j the-piles))
   ;  (join ... egad the repetition is intolerable
   ; hmm, this'll work
   (with (new-i (list:drop n the-piles.i)
          new-j (list:join (take n the-piles.i) the-piles.j))
     (if (< i j)
         (join (take dec.i the-piles)
               new-i
               (cut the-piles i dec.j)
               new-j
               (drop j the-piles))
         (join (take dec.j the-piles)
               new-j
               (cut the-piles j dec.i)
               new-i
               (drop i the-piles)))))
On the plus side, I would no longer need to use "deepcopy" when creating a backup of the game state.
(push (list the-deck the-piles) prev-states)
Which is pretty much the only benefit I'd get from outlawing modification of data structures, which I assume is what "pure functional programming" would mean. Don't even talk to me about how this would look if I had to simulate global variables...
Functional programming is a useful tactic in some cases. For example, I use a "chain-length" function, which might, say, take the list (7♥ 8♥ 9♣) and return 2, the number of revealed cards of consecutive rank of the same suit. I first thought about making it take "i" as an argument, referring to the ith pile in the-piles, but I made it take the list "xs" instead, and that allowed me to use it for other purposes: e.g. determining whether a move of n cards from pile i to j will create a longer single-suited chain. (Like, moving that 7♥ 8♥ chain onto a 9♥. I have a couple of AI-assistance procedures that look for moves satisfying conditions like that.) The expression is:
(is (+ n chain-length:the-piles.j)
    (chain-length:join (take n the-piles.i) the-piles.j))
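For concreteness, a hypothetical Python reconstruction of such a chain-length function might look like this (cards as (rank, suit, revealed) tuples, index 0 being the top of the pile; this is not the author's actual code):

```python
def chain_length(pile):
    """Length of the run of revealed, same-suit, consecutive-rank
    cards at the top of a pile, e.g. (7H 8H 9C) -> 2."""
    if not pile or not pile[0][2]:
        return 0
    n = 1
    for (r1, s1, _), (r2, s2, rev2) in zip(pile, pile[1:]):
        # The chain continues while the next card down is revealed,
        # matches the suit, and is exactly one rank higher.
        if rev2 and s2 == s1 and r2 == r1 + 1:
            n += 1
        else:
            break
    return n
```

Because it takes the pile itself rather than an index into a global, it works equally well on hypothetical piles like `join(take(n, piles[i]), piles[j])`, which is the point of the post above.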
Had I written "chain-length" to refer just to the ith pile in the global "the-piles" variable, I'd... basically have to rewrite most of it. So factoring out things into individual functions (which, btw, is, I think, in the category of things called "functional programming", but isn't usually the same as making things side-effect-free) is frequently a good thing--it makes the program easier to write, to read, to understand, and to maintain.
But the goal, the reason you'd want to do it, is not because it's called "functional programming"--it's to make the program easier to write/read/understand/maintain. And if something called "functional programming" makes it harder to do those things, then I would advocate throwing it out. I think preventing all side effects, or all side effects except those on global variables, falls firmly into the latter category; I think my example demonstrates this.
; Move a chain of n cards from i to j.
(zap [let (taken leftovers) (split _.i n)
       (copy _ i leftovers j (join taken _.j))]
     the-piles)

; Reveal the top card of the nth pile.
(def copdate (orig . kvs)
  (apply copy orig (mappend [list _.0 (_.1:orig _.0)] pair.kvs)))

(zap [copdate _ n [cons (list _.0 t) cdr._]] the-piles)
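The copdate idea (copy a sequence, applying a function at selected positions) has a direct analog in other languages too. A hypothetical Python sketch, not a translation of the Arc above:

```python
def copdate(orig, *kvs):
    """Return a copy of the sequence orig where, for each (index, fn)
    pair in kvs, the element at index is replaced by fn(orig[index]).
    The original is never mutated."""
    if len(kvs) % 2 != 0:
        raise ValueError("copdate expects index/function pairs")
    result = list(orig)
    for i in range(0, len(kvs), 2):
        idx, fn = kvs[i], kvs[i + 1]
        result[idx] = fn(orig[idx])
    return result
```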
In principle I agree, disabling the programmer's ability to use mutation without reason is just frustrating. But I don't expect an example to help show this; most examples we could come up with would probably have easy-in-hindsight solutions like this. I think the point is that mutation gives us more easy ways to do things, so that we can quickly experiment and stuff.
I see you want to push the example to its limit a bit. ^_^ Well, I assume this hypothetical language would be stocked with utilities that made up for what its pureness missed out on. How believable are these?
That's definitely what I meant, and I was going to just say so, but when I saw the ":)" I realized akkartik probably knew that and meant something like "that may not be 'oh god' complicated, but it's still not nearly as convenient as the mutating code." I don't know if that's a correct interpretation, but it made it more interesting to respond. ^_^
Yeah, I like the way you handle the-deck and the-piles. And I like this style of programming in general (i.e. a few global variables to keep state, functions corresponding to verbs in the problem space that manipulate the globals in an intuitive way). You've reminded me of how preventing all side effects could get awkward by ruling out this style of programming entirely.
To clarify, I'm not mostly interested in eliminating side effects out of some obsession with purity, but rather for performance's sake. I've been pursuing so many language design choices, like fexprs/first-class macros, that are supposed to really cripple performance that I'm starting to worry my hobby implementations aren't even going to run. I'd heard that Haskell and Clojure get a big performance boost from enforced immutability/no side effects because it guarantees that most operations can safely be run concurrently, so I've just considered it an option.
I'm a real novice when it comes to optimization, and I'm certainly interested in techniques that could help me get decent performance without having to sacrifice mutability. TCO, strategic memoization, compiler hints? Any tips on this front could be really useful to me.
Aww, you know I can't resist talking about my hobby horses. XD
Blade is written in Groovy right now, and to make partial evaluation possible, everything is done in continuation-passing style. Partial calculations' continuations are held in stasis until the definitions they're waiting for become available. I guess it's related to cooperative multithreading and single-assignment variables, but I'm kinda playing it by ear.
Back when my work on Blade was petering off, the main reason was that Groovy's syntax wasn't very friendly for continuation-passing style. I really missed macros, and I wasn't in the mood for writing a complicated Groovy AST transformation. I wanted to just port it to Arc or something, but I had trouble writing Arc code itself without getting hung up in all the abstraction leaks (particularly unhygienic macros) and absent features (particularly OO-style type-based dispatch and weak references).
Meanwhile, thanks to ranting about Blade here, I realized it wouldn't be a good REPL language. Single-assignment has gotta paint people into corners at a REPL, and the alternative would be to compile the whole application over and over, using some meta-runtime to orchestrate the compiles. But what runtime?
Also, I began to suspect Blade's design itself would result in something ugly to use and possibly even unusable. A declaration's behavior would depend on definitions, and a definition's value would be what the declarations said it was, and naturally there'd be deadlocks. The deadlocks were easy to avoid in toy examples, but I didn't know what limitations and gotchas Blade would develop once it matured. (I still wonder about that.)
So a plan fell into place: I could pursue a language that had similar high hopes for extensibility as Blade, but in a REPL-friendly form. That language's evaluation model would be much more familiar to me, and it would be closer to any of the languages I might choose to implement it in, so I'd probably be able to implement it faster and more confidently. Then I could use the resulting language--and the lessons I learned from it--to experiment with different variations on CPS and finish Blade.
So I started working on that language, and I called it Penknife. ^^
Altogether, I don't think this was based on any bad experience with pure functional programming. It was more about my expectation of those bad experiences, together with the frustration of writing CPS code in longhand Groovy.
> Meanwhile, thanks to ranting about Blade here, I realized it wouldn't be a good REPL language. Single-assignment has gotta paint people into corners at a REPL,
> So a plan fell into place: I could pursue a language that had similar high hopes for extensibility as Blade, but in a REPL-friendly form.
So the REPL was a large part of it. I've heard too that single-assignment is problematic for REPLs, but I don't quite understand why. Can't you just use let everywhere you'd use = and def ordinarily?
; Scribbling at the REPL
arc> (= x 5)
5
arc> (def add1 (n) (+ n 1))
#<procedure: add1>
arc> (add1 x)
6
; Single-assignment version
arc> (let x 5)
nil
arc> (let add1 [+ _ 1])
nil
arc> (let x 5
       (let add1 [+ _ 1]
         (add1 x)))
6
REPLs should provide a convenient way to grab commands from the history anyway, so you can just keep building on your previous let. It is more cumbersome, but you also avoid some of the nasty surprises that come after you've accumulated a complex state in your REPL.
Could this style of REPL'ing address the problems posed to REPLs by single-assignment, or are there larger problems I'm ignoring?
Update: In the particular REPL example I've shown, there's no need for the style transformation since nothing gets redefined. Hopefully you can still gather what I mean from it! ^_^
> Can't you just use let everywhere you'd use = and def ordinarily?
Well, single-assignment can have variables that are introduced without values and then assigned to later. It's a bit of a relaxation of pure FP that (I think) doesn't sacrifice referential transparency. Hmm... actually, a lambda that assigns to a binding outside itself is not referentially transparent, so oh well. :-p
In any case, Blade isn't directly supposed to be a single-assignment or pure sort of language. I just want to view the space of Blade declarations as a completely orderless set, letting the compilation of each one run in parallel semantically, and I still want it to have an intuitive kind of determinism. Inasmuch as I can orderlessly orchestrate side effects in ways I find bearably intuitive, I'll probably add side effects to Blade.
Blade side effects would also be limited by the fact that I want to build up to an IDE that rewinds the compile when you edit. In fact, I'm almost certain my current plan won't work well with that. I expect custom declaration syntaxes to be supported by core-syntax declarations that happen to branch based on the declarations they find by reading the project tree, but then editing any part of a file would invalidate the branching itself and cause recompilation of all the custom-syntax declarations in that file. Seems there'll need to be a more edit-resistant kind of branching. I already expect a bit of IDE cooperation with this--it doesn't have to be as hackish as diffing plain text--but that's where my ideas leave off.
...Actually, I've got a bit of an idea already. XD A declaration that observes a file's contents in order to branch can limit its observation to some method that doesn't rely on the contents of the individual declarations, and then each of the branches can observe just its own part of the file. The core syntax already works like this, essentially splitting the file based on appearances of "\n\n[".
Anyway, you've gotten me thinking and ranting. ^_^
---
> Would this style of REPL'ing address the problems posed to it by single-assignment, or are there other problems I've ignored?
The kind of problem I expect to happen is that I'll define something once, I'll define something based on that, and then I'll realize the first thing had a bug, but I won't be able to redefine it as simply as I can in Arc. Once I do enter the correct version, I'll have to paste in all the things that depend on it too.
To draw a connection, it's similar to how when you want to redefine a macro in Arc, you have to re-enter all the code that used the macro in order for it to use the new one, whereas fexprs wouldn't have this problem.
> Hmm, I wonder if each is a good way to encode the lazy-list conversion. Perhaps it makes sense to make each generic, and map just use each.
Feels a bit like you missed my point. Just in case, it was:
1. Make 'some, 'fn-ifdecap, and 'as-lazy-list generic.
; 'defgeneric versions
(some (fn (x) <test>) xs)
(fn-ifdecap (fn (x xs) <split case>) (fn () <empty case>) xs)
(as-lazy-list (fn (lazy-orig) <lazy result>) orig)
; the versions I currently have in my Penknife core library draft
[fn-any xs [fn [x] <test>]]
[fn-ifdecap xs [fn [x xs] <split case>] [fn [] <empty case>]]
[fn-decap-erism orig [fn [lazy-orig] <lazy result>]]
; I'll probably rename this to "call-as-seq".
; A "decap-er" is something that has fn-ifdecap behavior.
2. Base utilities like 'ifdecap (a macro), 'all, 'each, and 'map off of these.
The version of 'map in my draft is something like this:
[fun fn-map [seq func]
  [rely:decap-erism lazy-seq seq
    [nextlet rest lazy-seq ; aka xloop, or named let
      [make-decap-er:fn [then else]
        [ifdecap first rest rest
          [then func.first next.rest]
          else.]]]]]
No 'each is required, just 'fn and 'xloop. :) And the 'xloop is just there to keep from bothering with conversion when recurring on the tail.
Aside: The "rely" here is supposed to make the result of fn-map undefined if the result of decap-erism is undefined. That part probably isn't easy to translate to Arc; I essentially have an extra implicit parameter in every Penknife function call, specifying what to try instead if the call is rejected. It's kind of a tacked-on thing, and I'd avoid it if I had some better idea, but it seems to help immensely with writing extensible utilities.
> Now I know that they make the resulting language impossible to compile :/
Is that a bad thing?
Update: What I mean is, after the recently-shared LtU thread about fexprs [1], I've had trouble understanding why compilation is so often considered a must-have feature. If a language is difficult or impossible to compile, why not just interpret it?
Interpretation seems fitting to me for a language like arc whose focus is on exploratory programming and axiomatic language design.
I'm starting to think that if a language is compilable by default (i.e. without compiler hints from the programmer), it could be an indicator that the language isn't dynamic enough.
For example, arc's macros are fairly powerful as is, but people sometimes wish their scoping was more robust or that you could pass them around as function arguments. When it's learned that these features come at the cost of compilability, the usual reaction is to conclude that the features aren't worth it. I think the proper reaction (when expressivity is more important to you than performance) is to conclude that compilation isn't worth it.
My opinions on this stuff are fairly new and untested, so maybe one of the more experienced lispers here just needs to set me straight. ;)
I like compiled languages like Scheme because they let me be expressive. I can use whatever oddball syntaxes I want, safe in the knowledge that the parsing inefficiencies will only give my application horrible load time. :) Performance isn't nearly as important to me as expressiveness, but I do sometimes write programs for purposes other than thought experiments, and those generally need to finish what they're doing faster than the time it takes for me to give up and rewrite them. ^^ In those cases, I'm happier if I've chosen a language that gives me a fair amount of practicality and a good amount of expressiveness at the same time, rather than one that promises lots of expressiveness but forces me to give it all up in order to get things done.
Fexpr calls--I assume you're talking about those--are pretty much never what I actually want to express in syntax anyway. When I'm writing and reading code, I'm viewing it from a static standpoint, and so I choose syntaxes that make sense in static ways. If the very description of my algorithm means different things at different points in the execution of the program, that's darn confusing. XD (I suppose that would be Necker Cube code.) Not all fexpr-powered programs need to use them that way, but if they don't, what's the point?
Were I to be viewing the program in motion, it would be a different story. Then I'd want a very dynamic syntax, even one that shows me the values of variables and other dynamic information, while still presenting it all in a well-factored enough way that it's easy to digest. I'd do searches on things like "log file" and "data visualization" to find inspiration. But then I'd probably never write in that syntax. Since run time deals heavily in concretes at high speeds, I'd probably look for a way to interact using some kind of shell scripting language, or even a joystick input. :-p
I hope I haven't been too abrasive here. ^^; I actually have only a very weak conscious idea of why I like that Arc's compiled, and it took me several hours to collect my thoughts enough to post. Then this post kinda found itself all at once. ^_^
Not abrasive at all! :) Hope you don't mind if I respond in small pieces (i.e. some now, some later).
> Not all fexpr-powered programs need to use them that way, but if they don't, what's the point?
Am I incorrect in thinking that arc macros are a subset of fexprs in terms of expressiveness? You should be able to define 'mac in terms of an fexpr and eval, and then do all the macro stuff you're used to doing. A disadvantage would be the increased run-time penalty. Advantages could include:
1. Lexical scoping now applies to macros, so you no longer have to use that do.foo pattern to avoid accidental macro-expansion (http://arclanguage.org/item?id=11688)
2. Local and anonymous macros
3. Module system possibilities open up because you can now store callable macros in lists and tables
4. The language implementation can become smaller and more hackable with the elimination of the macro-expander
Do you find any of these to be particularly desirable, or do you have objections about any of them?
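To make the idea concrete, here's a sketch in a hypothetical fexpr-enabled Arc, where a made-up 'defexpr form receives its argument forms unevaluated along with the caller's environment (none of this exists in Arc today):

```arc
; Hypothetical: 'defexpr receives the argument forms unevaluated,
; plus the caller's environment 'env. Conditional evaluation then
; needs no macro at all, just fexpr + eval:
(defexpr my-when (test . body) env
  (if (eval test env)
      (last (map [eval _ env] body))))
```

Since the body forms only ever reach eval when the test is true, this behaves like the 'when macro, but the decision happens at call time rather than expansion time.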
I'm reminded again of a post of mine I brought up a while ago, http://arclanguage.org/item?id=11684. [1] The only point I raise against fexprs there--and interpretation of quoted code in general--is that it gives up static optimization and verification. ^_^ This is part of why it took me a while to form a response initially; my main objection to your argument was that the assumption "when expressivity is more important to you than performance" wasn't black and white for me, so I wasn't sure it would be a relevant response.
So I'll probably be a bit wishy-washy in the following responses. Our opinions aren't actually in conflict; they're just motivated by different premises.
> Am I incorrect in thinking that Arc macros are a subset of fexprs in terms of expressiveness?
I think that's right. I think the one semantic difference is that you can redefine an fexpr and count on all the existing uses of it to work using the new definition. But that's only a positive. :-p
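A small runnable illustration of that difference in today's Arc (the names are mine):

```arc
; With compile-time macros, a function keeps the expansion that was
; current when it was defined:
(mac double (x) `(* 2 ,x))
(def f (n) (double n))
(mac double (x) `(* 3 ,x))
(f 5)  ; still 10: f was compiled with the old 'double

; An fexpr 'double would be looked up at call time instead, so after
; the redefinition (f 5) would give 15.
```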
---
> 1. Lexical scoping now applies to macros, so you no longer have to use that do.foo pattern to avoid accidental macro-expansion (http://arclanguage.org/item?id=11688)
The most natural (IMO) way to add this to Arc would be to have the compiler pass around a complete local static environment, rather than just a list of the local variables. It could be as simple as replacing that list with an alist. Of course, then there'd need to be a 'w/mac form too.
I admit this solution adds complexity to the language core. That's issue #4, though. ^_^
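For instance, a hypothetical 'w/mac might look like this (neither 'w/mac nor its expansion exists yet; this is just the shape I have in mind):

```arc
; Hypothetical: bind 'square as a macro only within this scope,
; shadowing any global meaning of the name.
(w/mac (square (x) `(* ,x ,x))
  (square 5))  ; => 25, with the expansion done at compile time
```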
---
> 3. Module system possibilities open up because you can now store callable macros in lists and tables
Yep, that's totally true. You can import such structures globally in either kind of language, especially if the language supports replacing the global environment so you don't clobber existing things (Penknife's approach). Local-scope imports, though, make the meanings of variables in their scope ambiguous until the import is performed at run time, and by that point compile-time macros will have been missed.
Personally, I like knowing which variables I'm importing at the time I write the code, so even if I do want to locally import a bunch of variables all at once, perhaps for some macro, I find it tolerable to build a complete (with (a _!a b _!b c _!c) ...) form for that purpose. Alternatively I'd limit local imports to statically determinable sets of names (like forcing (w/import-from var-list foo ...) to use a predefined global variable 'var-list). Of course, these aren't as general. ^_^
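Spelled out with a module table m and hypothetical names 'dist and 'norm, that form looks like:

```arc
; A hand-written local import from a module table m.
; 'm, 'dist, and 'norm are hypothetical names; m!dist is
; ssyntax for (m 'dist).
(with (dist m!dist
       norm m!norm)
  (dist (norm a) (norm b)))
```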
---
> 4. The language implementation can become smaller and more hackable with the elimination of the macro-expander
That's also true. :) My opinion is that a smaller language implementation doesn't imply a more convenient language to use, even if it may be a more convenient language to maintain. If for some reason we want to compile some of our code (probably for performance, but also potentially for code generation), and the language already gives us a suitable framework to do that in, we don't have to build that framework ourselves. In fact, it's more convenient to simulate a simple framework (e.g. Arc) within a complicated one (e.g. Racket) than vice versa.
Yeah, I never use Arc's tagging either, because it just isn't useful for anything other than what pg designed it for, namely differentiating between functions and macros.
Hopefully I'll be able to get my hack onto Anarki soon so you guys can look at it and tell me if it's an improvement at all.
Types are useful in proportion to the number of primitives that can handle them.
That's basically why I built defgeneric; at this point len, iso, keep, some, rem, and all are all generic, which means they can be extended for new types. (http://github.com/akkartik/arc)
In wart the type system's baked in pretty early on, and I try to use it everywhere I possibly can. (http://github.com/akkartik/wart)
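As a toy sketch of the dispatch idea (not the actual code from either repo): keep per-type implementations in a table and fall back to the built-in. The 'grid type here is made up.

```arc
; Per-type implementations, keyed by (type x).
(= my-len-impls* (table))

(def my-len (x)
  (aif (my-len-impls* (type x))
       (it x)
       (len x)))  ; fall back to the built-in for known types

; Extending 'my-len to a new type, assuming a grid is built with
; (annotate 'grid tbl) where tbl has 'rows and 'cols entries:
(= (my-len-impls* 'grid)
   (fn (g) (* ((rep g) 'rows) ((rep g) 'cols))))
```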
From your description at http://arclanguage.org/item?id=13474, I definitely think it's a step in the right direction. It has already influenced my ideas about how to implement typing in upcoming toy interpreters.