2 points by aw 2 days ago | link | parent | on: Creating Languages in Racket

The first question I'd ask is, what kind of language do you want to create?

That makes it a lot easier to answer a question like "Is X a good choice for Y?" It depends on Y! :-)

Then, along with asking here (which is fine), you might also want to ask the Racket folks. Go to https://racket-lang.org/ and scroll down to "Community". Then you can ask, "I'd like to create a language like Y, would Racket be a good choice? If so, how would I go about it?"

2 points by krapp 2 days ago | link | parent | on: Creating Languages in Racket

Would anyone recommend Racket for a tow-headed newbie who wants to learn how to implement their own language after running Arc for a bit? Language development is something I've wanted to do for a while, but I have no background in type theory, compilers, or anything of that sort.

Into the void we go.

"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."

Oh, apparently it's Interlisp! They call the ] a super-parenthesis.

http://bitsavers.trailing-edge.com/pdf/xerox/interlisp/Inter...

  The INTERLISP read program treats square brackets as 'super-parentheses': a
  right square bracket automatically supplies enough right parentheses to match
  back to the last left square bracket (in the expression being read), or if none
  has appeared, to match the first left parentheses,
  e.g.,    (A (B (C]=(A (B (C))),
           (A [B (C (D] E)=(A (B (C (D))) E).
Here's a document which goes over a variety of different notations (although the fact they say "there is no opening super-parenthesis in Lisp" seems to be inaccurate considering the above):

http://www.linguistics.fi/julkaisut/SKY2006_1/2.6.9.%20YLI-J...

They favor this approach, which is also the one that best matches the way I intend for Parendown to work:

"Krauwer and des Tombe (1981) proposed _condensed labelled bracketing_ that can be defined as follows. Special brackets (here we use angle brackets) mark those initial and final branches that allow an omission of a bracket on one side in their realized markup. The omission is possible on the side where a normal bracket (square bracket) indicates, as a side-effect, the boundary of the phrase covered by the branch. For example, bracketing "[[A B] [C [D]]]" can be replaced with "[A B〉 〈C 〈D]" using this approach."

That approach includes what I would call a weak closing paren, 〉, but I've consciously left this out of Parendown. It isn't nearly as useful in a Lispy language (where lists usually begin with operator symbols, not lists), and the easiest way to add it in a left-to-right reader macro system like Racket's would be to replace the existing open paren syntax to anticipate and process these weak closing parens, rather than non-invasively extending Racket's syntax with one more macro.

2 points by i4cu 4 days ago | link | parent | on: Ask AF: Advantages of alists?

If you have a small number of key/value pairs and care about order then alists are both efficient and practical, but otherwise I would not give them too much weight.

An example I can give is found in html.arc. The start-tag function uses an alist to generate tag attributes:

  (def start-tag (spec)
    (if (atom spec)
        `(pr ,(string "<" spec ">"))
        (let opts (tag-options (car spec) (pair (cdr spec)))
          (if (all [isa _ 'string] opts)
              `(pr ,(string "<" (car spec) (apply string opts) ">"))
              `(do (pr ,(string "<" (car spec)))
                   ,@(map (fn (opt)
                            (if (isa opt 'string)
                                `(pr ,opt)
                                opt))
                          opts)
                   (pr ">"))))))
Notice that (pair (cdr spec)) is an alist. Now if you wanted to extend start-tag with conditional operations on the tag spec, you could bind (pair (cdr spec)) to, say, a var `attrs` and use Arc's built-in functions to inspect and modify it as you see fit. As you can see, a table is probably not needed when there are at most a half-dozen items, plus you would lose the order. A plain list would prevent you from pairing keys with values, so you couldn't do anything meaningful like inspection or conditional modification.
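
For instance, with a spec like (a href "foo.html" class "nav"), the attribute portion pairs up into an alist you can query with Arc's built-ins (a quick sketch; the spec values here are made up):

  (pair '(href "foo.html" class "nav"))
  ; => ((href "foo.html") (class "nav"))

  (alref (pair '(href "foo.html" class "nav")) 'class)
  ; => "nav"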

It's worth noting that a serious downside of alists is having no real means to detect a data type, because an alist really doesn't have one (unlike a table). You could inspect the first item to try to detect the type that way, but that logic's soundness and efficiency break down pretty quickly in all but the simplest cases.

So, again: with a small number of pairs, where you are confident in the shape of the data and have total control of the data usage (i.e. where and how it gets passed around), alists are good; otherwise you would need a pretty good or nuanced reason to use them.

Some of pg's other ideas [1] are: "having several that share the same tail, and preserve old values".

I.e. you could have some function that accumulates pairs (where some or many have the same key but a different value). This is for when you don't want the obvious behaviour a table provides (where the last one added wins), and/or you need the previous entries. Note that you can save an alist to a file and reload it easily while still having access to the history of values for a given key, so you're able to reconstruct your operations from historical data. You just can't do that with a table.
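
For instance (a quick sketch; `attrs` is just an illustrative name):

  (= attrs (list '(color "red")))
  (push '(color "blue") attrs)
  ; attrs is now ((color "blue") (color "red"))

  (alref attrs 'color)
  ; => "blue" -- the newest pair wins on lookup, but the old one is still in the tail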

1. http://arclanguage.org/item?id=20865

3 points by akkartik 4 days ago | link | parent | on: Ask AF: Advantages of alists?

Just to recap my opinion that I gave you over chat: the advantage alists have is that they're simple to support. Along pretty much any other axis, particularly if they grew too long, you'd be better off using some other data structure. Which one would depend on the situation.

> You've got me curious now about how this relates to Amacx :)

Why, everything! :-) E.g. I start with: what if top level variables were implemented by an Arc table, and top level variable references were implemented by an Arc macro? That is, what if top level variables were built out of lower level language axioms, instead of being built in?
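
A minimal sketch of the idea in Arc (the names `container*` and `topref` are hypothetical; this isn't Amacx's actual implementation):

    ; a container is just a table holding top level variables
    (= container* (table))

    ; a top level variable reference expands into a table lookup
    (mac topref (name)
      `(container* ',name))

    (= (container* 'x) 42)
    (topref x)  ; => 42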

We end up with something that's kind of like modules, but doesn't do everything that we'd typically expect modules to do (though perhaps we could implement modules on top of them if we wanted to), and also does some things that modules don't do (for example, we can load code into a different environment where the language primitives act differently).

To give a name to this thing that is kind of like modules but different, I called them "containers", because they're something you load code into.

Are containers useful? Well, I'm guessing it would depend on whether we'd ever want to load code into different environments in our program. If we only want to load code once, and all we want is a module system, I imagine it'd probably be more straightforward to just implement a module system directly.

On the other hand, suppose we have a runtime that gives us some nifty features, but is slower than plain Arc. Now it seems like containers could turn out to be a useful idea. Perhaps I have some code that I want to load in plain Arc where it'll run fast, and other code that I want to run in the enhanced runtime and I don't mind that it's slower.

> I find that having tests allows me to start out in a sort of engineering mindset, in your terms, where I just get individual cases working one by one. But at the same time they keep me from growing too attached to a single implementation and leave me loose to try to think up more axiomatic generalizations over time.

Exactly!

This is the classic test-driven development refactoring cycle: add features with tests, then refactor, e.g. to remove duplicate code, and/or otherwise refactor to make the code more axiomatic.

Since "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp", one could, in theory, start with such a C or Fortran program and refactor towards an axiomatic approach until you had reinvented Lisp, with the program written in Lisp :-)

But in practice I think going the other way is sometimes necessary: that is, starting with some axioms, and seeing what can be implemented out of them.

I'm not sure why that is (why doesn't anyone keep refactoring a large C program and end up with Lisp?) but I suppose it might be because it's too cognitively difficult, or you end up at some kind of local maximum in your design, or something.

In any case, I find tests absolutely essential for working on Amacx... not just "nice to have" or "saves me a lot of time", but "impossible to do without them"!

2 points by i4cu 5 days ago | link | parent | on: Arc REPL in emacs using cider-mode

Related, if you're considering nREPL for Arc:

nREPL: Beyond Clojure

https://metaredux.com/posts/2019/01/12/nrepl-beyond-clojure....


You've got me curious now about how this relates to Amacx :)

I find that having tests allows me to start out in a sort of engineering mindset, in your terms, where I just get individual cases working one by one. But at the same time they keep me from growing too attached to a single implementation and leave me loose to try to think up more axiomatic generalizations over time. You can kinda see that in http://akkartik.name/post/list-comprehensions-in-anarki if you squint a little; I don't show my various working copies there, but the case analysis I describe does faithfully show how I started out thinking about the problem, before the clean general solution suddenly fell out in a flash of insight.

(Having tests doesn't preclude more systematic thinking about the space, and proving to myself that a program is correct. But even if I've proved a program correct to myself I still want to retain the tests. My proofs have been shown up too often in the past ^_^)


It's interesting to be taking an axiomatic approach. That is, in this case, to add to the language the axiom that expressions can be labeled with their source file locations.

It might not work: it might turn out that the feature I want (to be able to track source locations through macro expansions) can't be expressed in terms of this particular set of axioms. Or, it might be that it can, but the result is a runtime too slow for me to want to use it.

But, if it does work, it has its own internal logic. What does (cdr x) mean when x has been labeled with source locations? Well, clearly, what it ought to mean is the tail of x, labeled with the source locations of the tail of x. Theorems such as (apply (fn args args) xs) ≡ xs should continue to work.

On the other end of the spectrum from an axiomatic approach is engineering. Have a list of features you want, and design a system that implements all of them. This too might fail sometimes (perhaps the features you want turn out to be incompatible, or you design yourself into a corner that's hard to get out of)... but most of the time it's more reliable, in the sense that usually we can come up with some design that implements all (or at least most!) of the features we want... even if maybe the result isn't very pretty.

The downside of engineering is design complexity. Complexity will probably at least scale linearly with the number of features, if not more likely by some power law. If we're lucky we may see some simplifications in the design along the way that we can refactor into, some axioms of the design that become apparent that we can incorporate... but most of the time, in practice, the design gets more and more complex as we add features.

Engineering is attractive because it gets things done. "I just want X, let's implement X". There are a lot of times when what I want is just to implement X, and I engineer a design, and it works out fine.

The axiomatic approach is more uncertain. Will it work? I don't know. It's also harder. Oops, ssyntax stopped working. Why? `some` stopped working. Why? `recstring` stopped working. Why? `+` stopped working. Why? Is it because my implementation of `apply` is broken, or because I broke the compiler and it's now outputting broken code, or because my runtime is broken? It could be any of these. Another day, another week of debugging.

It's also more fun. There are many macro systems. Many of them are practical. Some have features I don't care about, some are more complicated than I like, but I don't have much interest myself in engineering yet another macro system. Axioms are more interesting. Perhaps it will turn out that this particular set of axioms doesn't work out for this particular feature. But then at least I know why :-)


What a nice change! Super glad to have this.

No, you can interleave bindings and body expressions however you like, but the catch is that you can't use destructuring bindings since they look like expressions. It works like this:

  (lets) -> nil
  (lets a) -> a
  (lets a b . rest) ->
    If `a` is ssyntax or a non-symbol, we treat it as an expression:
      (do a (lets b . rest))
    Otherwise, we treat it as a variable to bind:
      (let a b (lets . rest))
The choice is almost forced in each case. It almost never makes sense to use an ssyntax symbol in a variable binding, and it almost never makes sense to discard the result of an expression that's just a variable name.
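
Following that spec literally, a minimal sketch might look like this (hypothetical; Lathe's actual implementation, linked below, differs):

  ; a minimal sketch of `lets` as described above
  (mac lets args
    (if (no args)        nil
        (no (cdr args))  (car args)
        (let (a b . rest) args
          (if (or (ssyntax a) (~isa a 'sym))
              ; `a` is an expression: evaluate it for effect, then continue
              `(do ,a (lets ,b ,@rest))
              ; `a` is a variable to bind to `b`
              `(let ,a ,b (lets ,@rest))))))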

The implementation is here in Lathe's arc/utils.arc:

Current link: https://github.com/rocketnia/lathe/blob/master/arc/utils.arc

Posterity link: https://github.com/rocketnia/lathe/blob/e21a3043eb2db2333f94...


What are the semantics of `lets` exactly? Do you have to alternate binding forms with 'body' expressions?

"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."

I was thinking about that and wanted to find a link earlier, but I can't find it. I hope someone will; I want to add a credit to that idea in Parendown's readme.

I seem to remember it being one of the early ideas for Arc that didn't make it to release, but maybe that's not right.

---

"I do like far better the idea of replacing withs with a '/' ssyntax for let."

I like it pretty well!

It reminds me of a `lets` macro I made in Lathe for Arc:

  (lets
    a (bind 1)
    (expr 1)
    b (bind 2)
    (expr 2)
    c (bind 3)
    (expr 3))
I kept trying to find a place this would come in handy, but the indentation always felt weird until I finally prefixed the non-binding lines as well with Parendown.

This right here

    (expr 1)
    b (bind 2)
looked like the second ( was nested inside the first, so I think at the time I tried to write it as

       (expr 1)
    b  (bind 2)
With my implementation of Penknife for JavaScript, I finally started putting /let, /if, etc. at the beginning of every line in the "block" and the indentation was easy.

With the syntax you're showing there, you at least have a punctuation character at the start of a line, which I think successfully breaks the illusion of nested parens for me.

---

It looks like I implemented `lets` because of this thread: http://arclanguage.org/item?id=11934

And would you look at that, ylando's macro (scope foo @ a b c @ d e f) is a shallow version of Parendown's macro (pd @ foo @ a b c @ d e f). XD (The `pd` macro works with any symbol, and I would usually use `/`.) I'm gonna have to add a credit to that in Parendown's readme too.

Less than a year later than that ylando thread, I started a Racket library with a macro (: foo : a b c : d e f): https://github.com/rocketnia/lathe/commit/afc713bef0163beec4...

So I suppose I have ylando to thank for illustrating the idea, as well as fallintothis for interpreting ylando's idea as an implemented macro (before ylando did, a couple of days later).


"erm, so all functions forms compiled during the eval of that expression would get named "fn-150"?"

That's what I mean, yeah. Maybe you could name them with their source code if you need to know which one it is, if it'll print names that wide. :-p This isn't any kind of long-term aspiration, just an idea to get you the information you need.


> Would it be okay to track it just on symbols

mmm, not sure. It'd probably be easier to start with a working version (even if slow) and then remove source information from lists and see if anything breaks.

> In `load`, use `read-syntax`, extract the line number from that syntax value

erm, so all functions forms compiled during the eval of that expression would get named "fn-150"?


I recall a quote -- by Paul Graham, I think -- about how one can always recognize Lisp code at a glance because all the forms flow to the right on the page/screen. How Lisp looks a lot more fluid while other languages look like they've been chiseled out of stone. Something like that. Does this ring a bell for anyone?

"A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes."

That's a good point that I hadn't considered, thanks. Yeah, I guess I'm not as opposed to obscuring containment as I'd hypothesized ^_^

Maybe my reaction to (foo /a b c /d e f) is akin to that of a more hardcore lisper when faced with indentation-based s-expressions :)

Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory.

I do like far better the idea of replacing withs with a '/' ssyntax for let. So that this:

  (def foo ()
    /a (bind 1)
    (expr 1)
    /b (bind 2)
    (expr 2)
    /c (bind 3)
    (expr 3))
expands to this:

  (def foo ()
    (let a (bind 1)
      (expr 1)
      (let b (bind 2)
        (expr 2)
        (let c (bind 3)
          (expr 3)))))
It even feels like a feature that '/' looks kinda like 'λ'.

But I still can't stomach using '/' for arbitrary s-expressions. Maybe I will in time. I'll continue to play with it.


That does seem like a conundrum.

Where do you actually need source location information in order to get Arc function names to show up in the profiler?

Would it be okay to track it just on symbols, bypassing all this list conversion and almost all of the Arc built-ins' unwrapping steps (since not many operations have to look "inside" a symbol)?

If you do need it on cons cells, do you really need it directly on the tail cons cells of a macro body? I'd expect it to be most useful on the cons cells in functional position. If you don't need it on the tails, then it's no problem when the `apply` strips it.

Oh, you know what? How about this: In `load`, use `read-syntax`, extract the line number from that syntax value, and then use `syntax->datum` and expand like usual. While compiling that expression, turn `fn` into (let ([fn-150 (lambda ...)]) fn-150) or (procedure-rename (lambda ...) 'fn-150), replacing "150" here with whatever the source line number is. Then the `object-name` for the function will be "fn-150" and I bet it'll appear in the profiling data that way, which would at least give you the line number to work with.
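
Here's the behavior I'd expect at the Racket REPL (fn-150 is just an illustrative name):

  (object-name (procedure-rename (lambda (x) x) 'fn-150))
  ; => 'fn-150

  ; Racket also infers names from binding forms:
  (object-name (let ([fn-150 (lambda (x) x)]) fn-150))
  ; => 'fn-150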

If you want, and if that works, you can probably have `load` do a little bit of inspection to see if the expression is of the form (mac foo ...) or (def foo ...), which could let you create a more informative function name like `foo-150`.
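
A hypothetical sketch of that inspection (the names here are invented for illustration):

  ;; derive a Racket-level name from an Arc expression and its line number
  (define (arc-fn-name expr line)
    (if (and (pair? expr)
             (memq (car expr) '(mac def))
             (pair? (cdr expr))
             (symbol? (cadr expr)))
        (string->symbol (format "~a-~a" (cadr expr) line))
        (string->symbol (format "fn-~a" line))))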

There's something related to this in `ac-set1`, which generates (let ([zz ...]) zz) so that at least certain things in Arc are treated as being named "zz". Next to it is the comment "name is to cause fns to have their arc names while debugging," so "zz" was probably the Arc variable name at some point.


Post with formatting fixed:

I was curious, has anyone implemented "internal definitions", that is, a way to introduce a variable like `let`, but without having to indent the code that uses it?

For example, I may have a function like:

    (def foo ()
      (let a (...)
        (do-something a)
        (let b (... a ...)
          (do-something-else b)
          (let c (... b ...)
            etc...))))
and the indentation keeps getting deeper the more variables I define. `withs` can help, but only if I'm not doing something in between each variable definition.

In Racket, this could be written

    (define (foo)
      (define a (...))
      (do-something a)
      (define b (... a ...))
      (do-something-else b)
      (define c (... b ...))
      etc...)
In Racket, this is called "using define in an internal definition context"; it's like using `let` to define the variable (the scope of the variable extends to end of the enclosing form) except that you don't have to indent.

In Arc, I'd like to use `var`, like this:

    (def foo ()
      (var a (...))
      (do-something a)
      (var b (... a ...))
      (do-something-else b)
      (var c (... b ...))
      etc...)
In Arc 3.2, a `do` is a macro which expands into a `fn`, which is a primitive builtin form. We could swap this, so that `fn` was a macro and the function body would expand into a `do` form.

Thus, for example, `(fn () a b c)` would expand into `($fn () (do a b c))`, where `$fn` is the primitive, builtin function form.

Many macros with bodies, such as `def` and `let`, expand into `fn`s, and this change would mean that the bodies of those forms would also be wrapped in a `do`.

For example, `(def foo () a b c)` expands (roughly) into `(assign foo (fn () a b c))`, which in turn would expand into `(assign foo ($fn () (do a b c)))`.

Thus, in most places where we have a body where we might like to use `var`, there'd be a `do` in place which could implement it.

Why go to this trouble? `var` can't be a macro since a macro can only expand itself -- it doesn't have the ability to manipulate code that appears after it. However `do`, as a macro, can do whatever it wants with the code it's expanding, including looking to see if one of the expressions passed to it starts with `var`.

As a macro, this could be an optional feature. Rather than being built into the language, where it would be there whether you wanted it or not, if you didn't like internal definitions you wouldn't have to load the version of `do` which implemented `var`.
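
To illustrate, here's a rough sketch of such a `do` (hypothetical; it assumes `$fn` is the primitive function form described above):

    ; a `do` macro that understands `var`; `uniq` makes a gensym
    (mac do body
      (if (no body)
           nil
          (no (cdr body))
           (car body)
          (caris (car body) 'var)
           (let (v name val) (car body)    ; v is the symbol `var` itself
             `(let ,name ,val (do ,@(cdr body))))
           ; otherwise evaluate the first expression, discard it, continue
           `(($fn (,(uniq)) (do ,@(cdr body))) ,(car body))))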

A further complication is what to do with lexical scope. If I write something like,

    (let var (fn (x) (prn "varnish " x))
      (var 42))
I'm clearly intending to use `var` as a variable. Similar to having lexical variables override macros, I wouldn't want `var` to become a "reserved keyword" in the sense that I now couldn't use it as a variable if I wanted to.

The Arc compiler knows of course which lexical variables have been defined at the point where a macro is being expanded. (In ac.scm the list of lexical variables is passed around in `env`). We could provide this context to macros using a Racket parameter.
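
A hypothetical sketch of what that could look like on the Racket side (all names here are invented for illustration):

    ;; hypothetical: a parameter holding the compiler's list of lexical variables
    (define current-lexenv (make-parameter '()))

    ;; the compiler would expand macros inside a parameterize:
    ;;   (parameterize ([current-lexenv env])
    ;;     (expand-macro m form))

    ;; and a macro like `do` could then ask whether `var` is shadowed:
    ;;   (memq 'var (current-lexenv))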


> What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.

Just to clarify, in my original design a `var` would expand into a `let` (with the body of the let extending down to the bottom of the enclosing form), and thus the definitions wouldn't have the same scope.

Which isn't to say we couldn't do something different of course :)

Huh, I thought I had fixed the formatting in the post, but apparently it didn't get saved. Too late to edit it now.


Hmm...

- I'd like to be able to use the Racket profiler.

- For the profiler to be useful, functions need to be labeled in some way so that the profiler will show which functions are which.

- To identify functions, I need to propagate source location information from an Arc source code file through to Racket, which includes propagating the source location information through Arc macros.

One option would be to create a macro system designed to propagate source location information through macros. This of course is what Racket does.

In Arc, macros are defined with list primitives (car, cdr, cons), function calls (for example, `(mac let (var val . body) ...)`), and operations that can be built out of those (such as quasiquotation).

My hypothesis is that by allowing list primitives to operate on forms labeled with source location information, we can continue to use Arc macros.

This leads to an interesting issue however...

Consider

    (apply (fn args args) xs)
this is an identity operation. For any list `xs`, this returns the same list.

In Arc 3.2, and in my implementation, an Arc function compiles into a Racket function. E.g., `(fn args args)` becomes a Racket `(lambda args args)`.

Of course we don't have to do that. If we were writing an interpreter, for example, an Arc function would compile into some function object that'd be interpreted by the host language... an Arc function wouldn't turn into something that could be called as a Racket function directly.

But, if `(fn args args)` is implemented as a Racket function `(lambda args args)`, then to call the function with some list `xs` we need to use Racket's apply. But, of course, Racket's apply takes a Racket list. So in Arc 3.2, Arc's apply calls Racket's apply after translating the Arc list into a Racket list.

Leaving out a couple of steps, what in essence we end up with in Racket is the equivalent of:

    (apply (lambda args args) (ar-nil-terminate xs))
where `ar-nil-terminate` converts an Arc list to a Racket list.
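
(For reference, Arc 3.2's ac.scm defines it roughly like this:)

    ;; convert an Arc (nil-terminated) list into a Racket list
    (define (ar-nil-terminate l)
      (if (or (eqv? l '()) (eqv? l 'nil))
          '()
          (cons (car l) (ar-nil-terminate (cdr l)))))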

Now, for Amacx, I've invented my own representation for Arc lists. In my version, lists (that is, cons cells) can be labeled with where in a source code file they originated from. For example, if I read "(a b c)" from a file, I can inspect that list for source location information:

    > (prn x)
    (a b c)
    > (dump-srcloc x)
    foo.arc:1.0 (span 7) (a b c)
      foo.arc:1.1 (span 1) a
      foo.arc:1.3 (span 1) b
      foo.arc:1.5 (span 1) c
which shows me that the list in `x` came from a source file "foo.arc" at line 1, column 0 with a span of 7 characters; that the first element "a" was at column 1, "b" was at column 3, and so on.

This is entirely internally consistent. E.g. (cdr x) returns a value which contains both the tail of the list `(b c)` and the source location information for the sublist.

But.

In a macro,

    (mac let (var val . body)
      `(with (,var ,val) ,@body))
I'm not seeing the source location get through the rest args.

    (mac let (var val . body)
      (dump-srcloc body)
      `(with (,var ,val) ,@body))
Zilch. Nothing. Nada. `body` is a plain list, no source location information.

Why?

Because I stripped it.

    (apply (lambda args args) (ar-nil-terminate xs))
The argument to Racket's `apply` has to be a Racket list. Not my own made-up representation for lists.

    > (apply (lambda args args) xs)
    ; apply: contract violation
    ;   expected: list?
Thus my version of `ar-nil-terminate` removes source location information and returns a plain Racket list. I did this early on, because loading Arc fails quite quickly when `apply` doesn't work. I didn't realize it would mean that macros wouldn't get source location information passed to them.

So, a macro like `let` turns into the equivalent of

    (annotate 'mac
      (lambda (var val . body)
        ...))
the macro is invoked with `apply`... and there goes the source location information in `body`.

Of course, like I said, I don't have to implement an Arc rest argument with a Racket rest argument. An Arc function that took a rest argument could turn into some other kind of object where I'd pass in the rest argument myself.

But that would be slower. Probably.

I can get the profiler to work (I think), but then I'd be profiling the slower version of the code.

Though the runtime that implements the extended form of lists with source location information is slower anyway because all of Arc's builtins need to unwrap their arguments.

So maybe this is just the next step.


"What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP."

I realize you also just said the example's bindings aren't mutually recursive, but I think if `a` and `c` "have the same scope" then `b` could depend on `c` as easily as it depends on `a`, so it seems to me we're talking about a design for mutual recursion. So what you're saying is the example could have been mutually recursive but it just happened not to be in this case, right? :)

Yeah, I think the most useful notions of "local definitions" would allow mutual recursion, just like top-level definitions do. It was only the specific, non-mutually-recursive example that led me to bring up Parendown at all.

---

The rest of this is a response to your points about Parendown. Some of your points in favor of not using Parendown are syntactic regularity, the clarity of containment relationships, and the avoidance of syntaxes that have little benefit other than saving typing. I'm a little surprised by those because they're some of the same reasons I use Parendown to begin with.

Let's compare the / reader macro with the `withs` s-expression macro.

I figure Lisp's syntactic regularity has to do with how long it takes to describe how the syntax works. Anyone can write any macro they want, but the macros that are simplest to describe will be the simplest to learn and implement, helping them propagate across Lisp dialects and thereby become part of what makes Lisp so syntactically regular in the first place.

- The / macro consumes s-expressions until it peeks ) and then it stops with a list of those s-expressions.

- The `withs` macro expects an even-length binding list of alternating variables and expressions, plus another expression. It returns the last expression modified by successively wrapping it in lexical bindings of each variable-expression pair from that binding list, in reverse order.

Between these two, it seems to me `withs` takes more time and care to document, and hence puts more strain on a Lisp dialect's claim to syntactic regularity. (Not much more strain, but more strain than / does.)
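
To make that concrete, here's a rough sketch of the / rule as reader code (hypothetical; not Parendown's actual implementation):

  ; read forms until the next close paren, leaving the paren itself
  ; for the enclosing reader to consume
  (define (skip-whitespace port)
    (let ([c (peek-char port)])
      (when (and (char? c) (char-whitespace? c))
        (read-char port)
        (skip-whitespace port))))

  (define (read-slash-tail port)
    (skip-whitespace port)
    (if (or (eof-object? (peek-char port))
            (eqv? (peek-char port) #\)))
        '()
        (cons (read port) (read-slash-tail port))))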

A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes.

If we start with / and `withs` in the same language, then I think the only reason to use `withs` is to save some typing.

So `withs` not only doesn't pull its weight in the language, but actively works against the language's syntactic regularity and containment relationship clarity.

For those reasons I prefer / over `withs`.

And if I don't bother to put `withs` in a language design, then whenever I need to write sequences of lexical bindings, I find / to be the clear choice for that.

Adding `withs` back into a language design isn't a big deal on its own; it's just slightly more work than not doing it, and I don't think its detriment to syntactic regularity and containment clarity are that bad. I could switch back from / to `withs` if it was just about this issue.

This is far from the only place I use / though. There are several macros like `withs` that I would need to bring back. The most essential use case -- the one I don't know an alternative for -- is CPS. If I learn of a good enough alternative to using / for CPS, and if I somehow end up preferring to maintain dozens of macros like `withs` instead of the one / macro, then I'll be out of reasons to recommend Parendown.


For what it's worth, I found this comment hit home with me:

https://news.ycombinator.com/item?id=18842902

particularly:

"Looking at it differently: to "get" Clojure one has to understand that there is no silver bullet: no single revolutionary "feature". Instead, the gains come from tiny improvements all over the place: immutable data structures, good built-in library for dealing with them, core.async, transducers, spec, Clojurescript, not having to convert to JSON or XML on the way to the client, re-using the same code on server-side and client-side, and more."


My objection is not to the precise characters you choose but rather the very idea of having to type some new character for every new expression. My personal taste (as you can see in Wart):

a) Use indentation.

b) If indentation isn't absolutely clear, just use parens.

Here's another comment that may help clarify what I mean: https://news.ycombinator.com/item?id=8503353#8507385

(I have no objection to replacing parens with say square brackets or curlies or something. I just think introducing new delimiters isn't worthwhile unless we're getting something more for it than just minimizing typing or making indentation uniform. Syntactic uniformity is Lisp's strength.)

I can certainly see how I would feel differently when programming in certain domains (like CPS). But I'm not convinced ideas that work in a specific domain translate well to this thread.

"What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`."

Here's aw's original example, with indentation:

    (def foo ()
      (var a (...))
      (do-something a)
      (var b (... a ...))
      (do-something-else b)
      (var c (... b ...))
      etc...)
The bindings aren't mutually recursive; there is definitely a specific order to them. `b` depends on `a` and `c` depends on `b`. And yet we want them in a flat list of expressions under `foo`. This was what I meant by "single unified block".

(I also like that aw is happy with parens ^_^)

Edit: Reflecting some more, another reason for my reaction is that you're obscuring containment relationships. That is inevitably a leaky abstraction; for example error messages may require knowing that `#/let` is starting a new scope. In which case I'd much rather make new scopes obvious with indentation.

What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.


"the '#/' is incredibly ugly, far worse than the indentation saved"

I'd love to hear lots of feedback about how well or poorly Parendown works for people. :)

I'm trying to think of what kind of feedback would actually get me to change it. :-p Basically, I added it because I found it personally helped me a lot with maintaining continuation-passing style code, and the reason I found continuation-passing style necessary was so I could implement programming languages with different kinds of side effects. There are certainly other techniques for managing continuation-passing style, side effects, and language implementation, and I have some in mind to implement in Racket that I just haven't gotten to yet. Maybe some combination of techniques will take Parendown's place.

---

If it's just the #/ you object to, yeah, my preferred syntax would be / without the # in front. I use / because it looks like a tilted paren, making the kind of zig-zag that resembles how the code would look if it used traditional parens:

  (
    (
      (
  /
  /
  /
This just isn't so seamless to do in Racket because Racket already uses / in many common identifiers.

Do you think I should switch Parendown to use / like I really want to do? :) Is there perhaps another character you think would be better?

---

What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`. Even in Parendown-style Racket, I use local definitions for the sake of mutual recursion, like this:

  /...
  /let ()
    (define ...)
    (define ...)
    (define ...)
  /...

I'm pretty sure Clojure is geared towards being practical over innovative. Or at least that's how I see it. When I think of Clojure I think in terms of its stability, reach, accessibility, correctness, and the thousands of small, seemingly insignificant, design decisions that it got right. All of this adds up to Clojure being a great user experience that has garnered a really good community. It truly is the only lisp dialect that large organizations, outside of academia, should take seriously.

edit - just to bring home the point: I spent the last several years on a project developing a significant Clojure code base with many libraries. And for quite a while (the last 2 years) I had been avoiding upgrading my version of Clojure, but when Clojure 1.10.0 came out a few weeks back I decided to do it. I was worried, but I only hit 2 warnings, which took 15 minutes to fix. I'm telling you, that's pretty impressive from a stability point of view.


I’m currently working to see if I can get profiling going...

At the moment, Amacx loads arc.arc unpleasantly slowly. And I don’t even have all of arc.arc included yet.

I imagine it’s probably because the compiler is slow. (What else could it be?)

And I imagine there’s a good chance the compiler is slow because as the compiler recursively works its way down into forms being compiled, I functionally extend the compilation context in a simplistic way:

    (def functional-extend (g nk nv)
      (fn (k)
        (if (is k nk) nv (g k))))
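
(Each extension wraps another closure around `g`, so a lookup walks the entire chain of extensions. A hash-backed context would make lookup roughly constant-time; a hypothetical Racket sketch, which would also mean changing lookups from `(g k)` to `hash-ref`:)

    ;; hypothetical: contexts as immutable hashes instead of closure chains
    (define (functional-extend g nk nv)
      (hash-set g nk nv))  ; returns a new context; g itself is unchanged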
But it would be nice to be able to measure it, instead of just guessing.

I hope I can use the Racket profiler, if I can propagate function names and/or function source locations from Arc to Racket.

At the moment a typical profile trace looks like:

                                    for-loop [72]                                          5.1%
                                    ??? [1]                                               38.9%
                                    ??? [12]                                              56.0%
     [12] 1456(30.9%)  104(2.2%)  ??? /Users/andrew/w/amacx/arc/runtime.rkt:409:9
                                    ??? [12]                                              56.0%
                                    ??? [1]                                               42.2%
i.e. the profiler knows that functions are being called, but has no way to identify them because all such identifying information has been lost by the time Arc code (e.g. implementing the compiler) has been compiled to Racket.

Today I ran into a weird issue. I was running the Racket profiler in DrRacket, and it worked at first, but then started to hang. So I closed DrRacket and restarted it, deleted all my `compiled` directories, reverted my source code back to an earlier version… and it still hung.

So I don’t know why it worked at first or why it stopped working.

But it turns out the profiler works well from the command line, so I’m doing that instead.


"While.. indentation reduction is one of the most important.. that benefit is less important once Parendown is around."

One counter-argument that short-circuits this line of reasoning for me: the '#/' is incredibly ugly, far worse than the indentation saved :) Even if that's just my opinion, something to get used to, the goal with combining defining forms is to give the impression of a single unified block. Having to add such connectors destroys the illusion. (Though, hmm, I wouldn't care as much about a trailing ';' connector. Maybe this is an inconsistency in my thinking.)


"`var` can't be a macro since a macro can only expand itself -- it doesn't have the ability to manipulate code that appears after it"

It can't be a regular Arc macro, but Arc comes with at least two things I'd call macro systems already: `mac` of course, but also `defset`. Besides those, I keep thinking of Arc's destructuring syntax and reader syntax as macro systems, but that's just because they could easily be extended to be.

I would say it makes sense to add another macro system for local definitions. As a start, it could be a system where the code (do (var a 1) (var b 2) (+ a b)) would expand by calling the `var` macro with both the list (a 1) and the list ((var b 2) (+ a b)).

But I think that's an incomplete design. All it does is help with indentation, and I feel like a simpler solution for indentation is to have a "weak opening paren" syntactic sugar where (a b #/c d) means (a b (c d)):

  (def foo ()
    (let a (...)
    #/do (do-something a)
    #/let b (... a ...)
    #/do (do-something-else b)
    #/let c (... b ...)
      etc...))
The "#/" notation is the syntax of my Parendown library for Racket (and it's based on my use of "/" in Cene). I recently wrote some extensive examples in the Parendown readme: https://github.com/lathe/parendown-for-racket/blob/master/RE...

(For posterity, here's a link specifically to the version of that readme at the time of this post: https://github.com/lathe/parendown-for-racket/blob/23526f8f5...)

While I think indentation reduction is one of the most important features internal definition contexts have, that benefit is less important once Parendown is around. I think their remaining benefit is that they make it easy to move code between the top level and local scopes without modifying it. I made use of this property pretty often in JavaScript; I would maintain a programming language runtime as a bunch of top-level JavaScript code and then have the compiler wrap it in a function body as it bundled that runtime together with the compiled code.

In Racket, getting the local definitions and module-level definitions to look alike means giving them the same support for making macro definitions and value definitions at every phase level, with the same kind of mutual recursion support.

They go to a lot of trouble to make this work in Racket, which I think ends up making mutual recursion into one of the primary reasons to use local definitions. They use partial expansion to figure out which variables are shadowed in this local scope before they proceed with a full expansion, and they track the set of scope boundaries that surround each macro-generated identifier so that those identifiers can be matched up to just the right local variable bindings.

The Arc top level works much differently than the Racket module level. Arc has no hygiene, and far from having a phase distinction, Arc's top level alternates between compiling and running each expression. To maintain the illusion that the top level is just another local scope, almost all local definitions would have to replicate that same alternation between run time and compile time, meaning that basically the whole set of local definitions would run at compile time. So local definitions usually would not be able to depend on local variables at all. We would have to coin new syntaxes like `loc=` and `locdef` for things that should interact with local variables, rather than using `=` and `def`.

Hm, funny.... If we think of a whole Arc file as being just another local scope, then it's all executed at "compile time," and only its `loc=` and `locdef` code is deferred to "run time," whenever that is. This would be a very seamless way to use Arc code to create a compilation artifact. :)
