Arc Forum | rocketnia's comments

That does seem like a conundrum.

Where do you actually need source location information in order to get Arc function names to show up in the profiler?

Would it be okay to track it just on symbols, bypassing all this list conversion and almost all of the Arc built-ins' unwrapping steps (since not many operations have to look "inside" a symbol)?

If you do need it on cons cells, do you really need it directly on the tail cons cells of a macro body? I'd expect it to be most useful on the cons cells in functional position. If you don't need it on the tails, then it's no problem when the `apply` strips it.

Oh, you know what? How about this: In `load`, use `read-syntax`, extract the line number from that syntax value, and then use `syntax->datum` and expand like usual. While compiling that expression, turn `fn` into (let ([fn-150 (lambda ...)]) fn-150) or (procedure-rename (lambda ...) 'fn-150), replacing "150" here with whatever the source line number is. Then the `object-name` for the function will be "fn-150" and I bet it'll appear in the profiling data that way, which would at least give you the line number to work with.

If you want, and if that works, you can probably have `load` do a little bit of inspection to see if the expression is of the form (mac foo ...) or (def foo ...), which could let you create a more informative function name like `foo-150`.
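
Here's a minimal Racket sketch of the mechanism I mean (not Anarki's actual `load`; the function name is just illustrative):

  ; Read one expression along with its line number. Line counting
  ; must be enabled on the port for `syntax-line` to report a line.
  (define (read-expr-and-line src in)
    (port-count-lines! in)
    (let ([stx (read-syntax src in)])
      (and (not (eof-object? stx))
           (cons (syntax-line stx) (syntax->datum stx)))))

  ; `procedure-rename` controls what `object-name` reports, which is
  ; presumably where the profiler gets its labels:
  (define f (procedure-rename (lambda (x) x) 'fn-150))
  (object-name f)  ; => 'fn-150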

There's something related to this in `ac-set1`, which generates (let ([zz ...]) zz) so that at least certain things in Arc are treated as being named "zz". Next to it is the comment "name is to cause fns to have their arc names while debugging," so "zz" was probably the Arc variable name at some point.

reply

3 points by aw 9 days ago | link

> Would it be okay to track it just on symbols

mmm, not sure. It'd probably be easier to start with a working version (even if slow) and then remove source information from lists and see if anything breaks.

> In `load`, use `read-syntax`, extract the line number from that syntax value

erm, so all function forms compiled during the eval of that expression would get named "fn-150"?

reply

3 points by rocketnia 8 days ago | link

"erm, so all functions forms compiled during the eval of that expression would get named "fn-150"?"

That's what I mean, yeah. Maybe you could name them with their source code if you need to know which one it is, if it'll print names that wide. :-p This isn't any kind of long-term aspiration, just an idea to get you the information you need.

reply


"`var` can't be a macro since a macro can only expand itself -- it doesn't have the ability to manipulate code that appears after it"

It can't be a regular Arc macro, but Arc comes with at least two things I'd call macro systems already: `mac` of course, but also `defset`. Besides those, I keep thinking of Arc's destructuring syntax and reader syntax as macro systems, but that's just because they could easily be extended to be.

I would say it makes sense to add another macro system for local definitions. As a start, it could be a system where the code (do (var a 1) (var b 2) (+ a b)) would expand by calling the `var` macro with both the list (a 1) and the list ((var b 2) (+ a b)).
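
Here's a rough sketch of what that protocol could look like. Everything in it is made up for illustration (`body-macs*`, `def-body-mac`, and `expand-body` are hypothetical names, and a real version would hook into `do` itself):

  ; A "body macro" takes two arguments: its own argument list and
  ; the list of forms that appear after it in the body.
  (= body-macs* (table))

  (mac def-body-mac (name args rest . body)
    `(= (body-macs* ',name) (fn (,args ,rest) ,@body)))

  ; Walk a body, giving each body macro the rest of the body.
  (def expand-body (forms)
    (if (no forms)
      nil
      (let form (car forms)
        (aif (and (acons form) (body-macs* (car form)))
          (list (it (cdr form) (expand-body (cdr forms))))
          (cons form (expand-body (cdr forms)))))))

  ; With that, `var` is a one-liner:
  (def-body-mac var (name val) rest
    `(let ,name ,val ,@rest))

Under this sketch, (expand-body '((var a 1) (var b 2) (+ a b))) would produce ((let a 1 (let b 2 (+ a b)))).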

But I think that's an incomplete design. All it does is help with indentation, and I feel like a simpler solution for indentation is to have a "weak opening paren" syntactic sugar where (a b #/c d) means (a b (c d)):

  (def foo ()
    (let a (...)
    #/do (do-something a)
    #/let b (... a ...)
    #/do (do-something-else b)
    #/let c (... b ...)
      etc...))
The "#/" notation is the syntax of my Parendown library for Racket (and it's based on my use of "/" in Cene). I recently wrote some extensive examples in the Parendown readme: https://github.com/lathe/parendown-for-racket/blob/master/RE...

(For posterity, here's a link specifically to the version of that readme at the time of this post: https://github.com/lathe/parendown-for-racket/blob/23526f8f5...)

While I think indentation reduction is one of the most important features internal definition contexts have, that benefit is less important once Parendown is around. I think their remaining benefit is that they make it easy to move code between the top level and local scopes without modifying it. I made use of this property pretty often in JavaScript; I would maintain a programming language runtime as a bunch of top-level JavaScript code and then have the compiler wrap it in a function body as it bundled that runtime together with the compiled code.

In Racket, getting the local definitions and module-level definitions to look alike means giving them the same support for making macro definitions and value definitions at every phase level, with the same kind of mutual recursion support.

They go to a lot of trouble to make this work in Racket, which I think ends up making mutual recursion into one of the primary reasons to use local definitions. They use partial expansion to figure out which variables are shadowed in this local scope before they proceed with a full expansion, and they track the set of scope boundaries that surround each macro-generated identifier so that those identifiers can be matched up to just the right local variable bindings.

The Arc top level works quite differently from the Racket module level. Arc has no hygiene, and far from having a phase distinction, Arc's top level alternates between compiling and running each expression. To maintain the illusion that the top level is just another local scope, almost all local definitions would have to replicate that same alternation between run time and compile time, meaning that basically the whole set of local definitions would run at compile time. So local definitions usually would not be able to depend on local variables at all. We would have to coin new syntaxes like `loc=` and `locdef` for things that should interact with local variables, rather than using `=` and `def`.

Hm, funny.... If we think of a whole Arc file as being just another local scope, then it's all executed at "compile time," and only its `loc=` and `locdef` code is deferred to "run time," whenever that is. This would be a very seamless way to use Arc code to create a compilation artifact. :)

reply

1 point by akkartik 11 days ago | link

"While.. indentation reduction is one of the most important.. that benefit is less important once Parendown is around."

One counter-argument that short-circuits this line of reasoning for me: the '#/' is incredibly ugly, far worse than the indentation saved :) Even if that's just my opinion, something to get used to, the goal with combining defining forms is to give the impression of a single unified block. Having to add such connectors destroys the illusion. (Though, hmm, I wouldn't care as much about a trailing ';' connector. Maybe this is an inconsistency in my thinking.)

reply

2 points by rocketnia 10 days ago | link

"the '#/' is incredibly ugly, far worse than the indentation saved"

I'd love to hear lots of feedback about how well or poorly Parendown works for people. :)

I'm trying to think of what kind of feedback would actually get me to change it. :-p Basically, I added it because I found it personally helped me a lot with maintaining continuation-passing style code, and the reason I found continuation-passing style necessary was so I could implement programming languages with different kinds of side effects. There are certainly other techniques for managing continuation-passing style, side effects, and language implementation, and I have some in mind to implement in Racket that I just haven't gotten to yet. Maybe some combination of techniques will take Parendown's place.

---

If it's just the #/ you object to, yeah, my preferred syntax would be / without the # in front. I use / because it looks like a tilted paren, making the kind of zig-zag that resembles how the code would look if it used traditional parens:

  (
    (
      (
  /
  /
  /
This just isn't so seamless to do in Racket because Racket already uses / in many common identifiers.

Do you think I should switch Parendown to use / like I really want to do? :) Is there perhaps another character you think would be better?

---

What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`. Even in Parendown-style Racket, I use local definitions for the sake of mutual recursion, like this:

  /...
  /let ()
    (define ...)
    (define ...)
    (define ...)
  /...

reply

1 point by akkartik 10 days ago | link

My objection is not to the precise characters you choose but rather the very idea of having to type some new character for every new expression. My personal taste (as you can see in Wart):

a) Use indentation.

b) If indentation isn't absolutely clear, just use parens.

Here's another comment that may help clarify what I mean: https://news.ycombinator.com/item?id=8503353#8507385

(I have no objection to replacing parens with say square brackets or curlies or something. I just think introducing new delimiters isn't worthwhile unless we're getting something more for it than just minimizing typing or making indentation uniform. Syntactic uniformity is Lisp's strength.)

I can certainly see how I would feel differently when programming in certain domains (like CPS). But I'm not convinced ideas that work in a specific domain translate well to this thread.

"What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`."

Here's aw's original example, with indentation:

    (def foo ()
      (var a (...))
      (do-something a)
      (var b (... a ...))
      (do-something-else b)
      (var c (... b ...))
      etc...)
The bindings aren't mutually recursive; there is definitely a specific order to them. `b` depends on `a` and `c` depends on `b`. And yet we want them in a flat list of expressions under `foo`. This was what I meant by "single unified block".

(I also like that aw is happy with parens ^_^)

Edit: Reflecting some more, another reason for my reaction is that you're obscuring containment relationships. That is inevitably a leaky abstraction; for example error messages may require knowing that `#/let` is starting a new scope. In which case I'd much rather make new scopes obvious with indentation.

What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.

reply

2 points by rocketnia 9 days ago | link

"What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP."

I realize you also just said the example's bindings aren't mutually recursive, but I think if `a` and `c` "have the same scope" then `b` could depend on `c` as easily as it depends on `a`, so it seems to me we're talking about a design for mutual recursion. So what you're saying is the example could have been mutually recursive but it just happened not to be in this case, right? :)

Yeah, I think the most useful notions of "local definitions" would allow mutual recursion, just like top-level definitions do. It was only the specific, non-mutually-recursive example that led me to bring up Parendown at all.

---

The rest of this is a response to your points about Parendown. Some of your points in favor of not using Parendown are syntactic regularity, the clarity of containment relationships, and the avoidance of syntaxes that have little benefit other than saving typing. I'm a little surprised by those because they're some of the same reasons I use Parendown to begin with.

Let's compare the / reader macro with the `withs` s-expression macro.

I figure Lisp's syntactic regularity has to do with how long it takes to describe how the syntax works. Anyone can write any macro they want, but the macros that are simplest to describe will be the simplest to learn and implement, helping them propagate across Lisp dialects and thereby become part of what makes Lisp so syntactically regular in the first place.

- The / macro consumes s-expressions until it peeks ) and then it stops with a list of those s-expressions.

- The `withs` macro expects an even-length binding list of alternating variables and expressions and another expression. It returns the last expression modified by successively wrapping it in lexical bindings of each variable-expression pair from that binding list, in reverse order.
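
As a rough measure of that difference, here's approximately what the first description comes to as a Racket reader extension. This is a hedged sketch, not Parendown's actual implementation (it ignores comments and eof, and it means `/` can no longer appear in identifiers while this readtable is active):

  ; Consume s-expressions until the next non-whitespace character is
  ; a closing paren, and return them as one list.
  (define (weak-paren-read in)
    (let loop ([acc '()])
      (regexp-match #px"^\\s*" in)  ; skip whitespace
      (if (eqv? (peek-char in) #\))
          (reverse acc)
          (loop (cons (read/recursive in) acc)))))

  (define weak-paren-readtable
    (make-readtable (current-readtable)
      #\/ 'terminating-macro
      (case-lambda
        [(ch in) (weak-paren-read in)]
        [(ch in src line col pos)
         (datum->syntax #f (weak-paren-read in))])))

  ; (parameterize ([current-readtable weak-paren-readtable])
  ;   (read (open-input-string "(a b /c d)")))
  ; => '(a b (c d))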

Between these two, it seems to me `withs` takes more time and care to document, and hence puts more strain on a Lisp dialect's claim to syntactic regularity. (Not much more strain, but more strain than / does.)

A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes.

If we start with / and `withs` in the same language, then I think the only reason to use `withs` is to save some typing.

So `withs` not only doesn't pull its weight in the language, but actively works against the language's syntactic regularity and containment relationship clarity.

For those reasons I prefer / over `withs`.

And if I don't bother to put `withs` in a language design, then whenever I need to write sequences of lexical bindings, I find / to be the clear choice for that.

Adding `withs` back into a language design isn't a big deal on its own; it's just slightly more work than not doing it, and I don't think its detriment to syntactic regularity and containment clarity is that bad. I could switch back from / to `withs` if it was just about this issue.

This is far from the only place I use / though. There are several macros like `withs` that I would need to bring back. The most essential use case -- the one I don't know an alternative for -- is CPS. If I learn of a good enough alternative to using / for CPS, and if I somehow end up preferring to maintain dozens of macros like `withs` instead of the one / macro, then I'll be out of reasons to recommend Parendown.

reply

2 points by akkartik 9 days ago | link

"A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes."

That's a good point that I hadn't considered, thanks. Yeah, I guess I'm not as opposed to obscuring containment as I'd hypothesized ^_^

Maybe my reaction to (foo /a b c /d e f) is akin to that of a more hardcore lisper when faced with indentation-based s-expressions :)

Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory.

I do like far better the idea of replacing withs with a '/' ssyntax for let. So that this:

  (def foo ()
    /a (bind 1)
    (expr 1)
    /b (bind 2)
    (expr 2)
    /c (bind 3)
    (expr 3))
expands to this:

  (def foo ()
    (let a (bind 1)
      (expr 1)
      (let b (bind 2)
        (expr 2)
        (let c (bind 3)
          (expr 3)))))
It even feels like a feature that '/' looks kinda like 'λ'.

But I still can't stomach using '/' for arbitrary s-expressions. Maybe I will in time. I'll continue to play with it.

reply

2 points by rocketnia 8 days ago | link

"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."

I was thinking about that and wanted to find a link earlier, but I can't find it. I hope someone will; I want to add a credit to that idea in Parendown's readme.

I seem to remember it being one of the early ideas for Arc that didn't make it to release, but maybe that's not right.

---

"I do like far better the idea of replacing withs with a '/' ssyntax for let."

I like it pretty well!

It reminds me of a `lets` macro I made in Lathe for Arc:

  (lets
    a (bind 1)
    (expr 1)
    b (bind 2)
    (expr 2)
    c (bind 3)
    (expr 3))
I kept trying to find a place this would come in handy, but the indentation always felt weird until I finally prefixed the non-binding lines as well with Parendown.

This right here

    (expr 1)
    b (bind 2)
looked like the second ( was nested inside the first, so I think at the time I tried to write it as

       (expr 1)
    b  (bind 2)
With my implementation of Penknife for JavaScript, I finally started putting /let, /if, etc. at the beginning of every line in the "block" and the indentation was easy.

With the syntax you're showing there, you at least have a punctuation character at the start of a line, which I think successfully breaks the illusion of nested parens for me.

---

It looks like I implemented `lets` because of this thread: http://arclanguage.org/item?id=11934

And would you look at that, ylando's macro (scope foo @ a b c @ d e f) is a shallow version of Parendown's macro (pd @ foo @ a b c @ d e f). XD (The `pd` macro works with any symbol, and I would usually use `/`.) I'm gonna have to add a credit to that in Parendown's readme too.

Less than a year later than that ylando thread, I started a Racket library with a macro (: foo : a b c : d e f): https://github.com/rocketnia/lathe/commit/afc713bef0163beec4...

So I suppose I have ylando to thank for illustrating the idea, as well as fallintothis for interpreting ylando's idea as an implemented macro (before ylando did, a couple of days later).

reply

2 points by rocketnia 3 days ago | link

"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."

Oh, apparently it's Interlisp! They call the ] a super-parenthesis.

http://bitsavers.trailing-edge.com/pdf/xerox/interlisp/Inter...

  The INTERLISP read program treats square brackets as 'super-parentheses': a
  right square bracket automatically supplies enough right parentheses to match
  back to the last left square bracket (in the expression being read), or if none
  has appeared, to match the first left parentheses,
  e.g.,    (A (B (C]=(A (B (C))),
           (A [B (C (D] E)=(A (B (C (D))) E).
Here's a document which goes over a variety of different notations (although the fact they say "there is no opening super-parenthesis in Lisp" seems to be inaccurate considering the above):

http://www.linguistics.fi/julkaisut/SKY2006_1/2.6.9.%20YLI-J...

They favor this approach, which is also the one that best matches the way I intend for Parendown to work:

"Krauwer and des Tombe (1981) proposed _condensed labelled bracketing_ that can be defined as follows. Special brackets (here we use angle brackets) mark those initial and final branches that allow an omission of a bracket on one side in their realized markup. The omission is possible on the side where a normal bracket (square bracket) indicates, as a side-effect, the boundary of the phrase covered by the branch. For example, bracketing "[[A B] [C [D]]]" can be replaced with "[A B〉 〈C 〈D]" using this approach."

That approach includes what I would call a weak closing paren, 〉, but I've consciously left this out of Parendown. It isn't nearly as useful in a Lispy language (where lists usually begin with operator symbols, not lists), and the easiest way to add it in a left-to-right reader macro system like Racket's would be to replace the existing open paren syntax to anticipate and process these weak closing parens, rather than non-invasively extending Racket's syntax with one more macro.

reply

1 point by akkartik 8 days ago | link

What are the semantics of `lets` exactly? Do you have to alternate binding forms with 'body' expressions?

reply

2 points by rocketnia 8 days ago | link

No, you can interleave bindings and body expressions however you like, but the catch is that you can't use destructuring bindings since they look like expressions. It works like this:

  (lets) -> nil
  (lets a) -> a
  (lets a b . rest) ->
    If `a` is ssyntax or a non-symbol, we treat it as an expression:
      (do a (lets b . rest))
    Otherwise, we treat it as a variable to bind:
      (let a b (lets . rest))
The choice is almost forced in each case. It almost never makes sense to use an ssyntax symbol in a variable binding, and it almost never makes sense to discard the result of an expression that's just a variable name.
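
Schematically, that dispatch is small. Here's a hedged sketch (it ignores edge cases like a nil binding name, and the real implementation differs):

  (mac lets args
    (if (no args)        nil
        (no (cdr args))  (car args)
      (let (a b . rest) args
        ; Ssyntax and non-symbols are treated as expressions to run
        ; for their effects; plain symbols become `let` bindings.
        (if (or (ssyntax a) (no (isa a 'sym)))
          `(do ,a (lets ,b ,@rest))
          `(let ,a ,b (lets ,@rest))))))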

The implementation is here in Lathe's arc/utils.arc:

Current link: https://github.com/rocketnia/lathe/blob/master/arc/utils.arc

Posterity link: https://github.com/rocketnia/lathe/blob/e21a3043eb2db2333f94...

reply

3 points by aw 9 days ago | link

> What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.

Just to clarify, in my original design a `var` would expand into a `let` (with the body of the let extending down to the bottom of the enclosing form), and thus the definitions wouldn't have the same scope.

Which isn't to say we couldn't do something different of course :)

Huh, I thought I had fixed the formatting in my post, but apparently it didn't get saved. Too late to edit it now.

reply


Yeah, that 1830 is the number of characters since the start of the stream, like you say. (I think it's specifically 1 plus the number of bytes, as documented at [1].)

Racket errors tend to report whatever location information they can, and Racket ports track four things: The name of the file, the line number, the column number, and the overall position in the stream. But line-counting isn't enabled by default on Racket streams, so the error you see doesn't contain the line number information.

I just pushed a commit[2] to Anarki that enables line-counting on the stream when an Arc file is loaded, which should mean you can now see the line and column of a reader error. A reader error on line 17, column 6 should now be printed like so:

  sample-file.arc:17:6: read: expected a closing "
[1] (https://docs.racket-lang.org/reference/linecol.html#(def._((...)

[2] https://github.com/arclanguage/anarki/commit/208620eca083650...
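
For reference, the change boils down to one call on the port before reading. A minimal sketch (not the literal commit):

  ; Without `port-count-lines!`, reader errors carry only a byte
  ; position; with it, Racket reports file:line:col.
  (define (read-all-arc path)
    (call-with-input-file path
      (lambda (in)
        (port-count-lines! in)  ; enable line/column tracking
        (let loop ([exprs '()])
          (let ([expr (read in)])
            (if (eof-object? expr)
                (reverse exprs)
                (loop (cons expr exprs))))))))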

reply

4 points by zck 7 days ago | link

What a nice change! Super glad to have this.

reply

4 points by kinnard 12 days ago | link

Wow.

reply

3 points by rocketnia 14 days ago | link | parent | on: Lexically scoped macros?

A situation where Common Lisp's `macrolet` or Racket's `let-syntax` would be handy came up here a few months ago. I dropped in with some thoughts on how we could add it to Arc: http://arclanguage.org/item?id=20561

I think what I describe there pretty much meshes with what you and waterhouse are describing here. :)

---

"And then "anarki-test" would be evaluated at compile time? Hmm..."

Yeah, I think basically every instance of local macros I've seen involves evaluating the expression at compile time.

That's even what Racket's `let-syntax` does, although in Racket's case it involves a little more detail since Racket enforces a strict separation between compile-time and run-time side effects. When Racket evaluates the expression at compile time (usually, in phase level 1) it first expands that expression in the phase level corresponding to the compile time of the compile time (phase level 2), and if that expression contains another `let-syntax`, then it starts expanding an expression in phase level 3 and so on.

Arc evaluates and expands everything in one phase, as far as Racket is concerned. It would make sense for `lexical-macro` to do its evaluation in the same phase too.

I notice Common Lisp's `macrolet` allows[1] inner `macrolet` macros to depend on outer ones, like this:

  (lexical-macro (foo) ''world
    (lexical-macro (bar) `(sym:+ "hello-" ,(foo))
      (bar)))
  ; could return the symbol 'hello-world
To support that, when `lexical-macro` evaluates the expression, it should expand that expression in the same macro binding environment the `lexical-macro` call was expanded in.

[1] http://www.lispworks.com/documentation/HyperSpec/Body/s_flet...

---

Here's a different take on the local macro concept by almkglor 10 years ago: http://arclanguage.org/item?id=3085

That implementation, `macwith`, is still around on Anarki's arc2.master branch, in the file lib/macrolet.arc: https://github.com/arclanguage/anarki/blob/af5b756e983807ba6...

It works by traversing the body s-expressions and expanding all occurrences it finds, leaving anything else alone. I think this means it tends to break if the code ever contains a list that looks like a call to that macro but isn't supposed to be, such as a quoted occurrence like '(my-macro), a binding occurrence like (fn (my-macro) ...), or even another `macwith` binding occurrence of the same macro name.

I don't prefer this to the other approach we're talking about, since after all it's within easy reach of the macro system to support local macros itself rather than with an error-prone code-walker like this.

However, `macwith` is a macro that makes sense in its own right. It's easy to work around many of the places the code walker runs into false positives, and if `macwith` came with support for an escape sequence, there would be easy workarounds in many other cases too.

If I tried to propose a particular escape sequence design for `macwith` right here and now, I could be here for a while. I've been working for two years to make a macro system suitable for factoring out escape sequence syntaxes into libraries, and my favorite designs for `macwith` escape sequences would be the ones that solved all the same problems I'm building that system for.

reply

3 points by rocketnia 27 days ago | link | parent | on: Bugs and failures

In Lathe and in the first version of Penknife (written in Arc), I was calling this kind of feature "failcall." A function could be called with `failcall` to handle its failures, or it could be called normally, in which case its failures would be promoted to errors automatically.

Your example of using Racket parameters leads to a slight difference in behavior from what I would want. Suppose the code in the body contains a call to some function that in turn makes a normal call to another function which fails. With the Racket parameter technique you talk about, the parameter binding would still be in scope at that point, so the failure would be caught, even though I think the author of that normal function call would have expected its unhandled failures to be promoted to bugs.

I remember thinking Racket parameters would be useful, but the technique I ended up with didn't use them at all. There's a full-featured implementation in the Lathe arc/ folder's failcall.arc[1], but here's a short proof of concept for Anarki:

  ; In this example, a "failfn" is a tagged single-argument function
  ; that returns (list t <success-val>) or (list nil <failure-val>).
  (mac failfn (x . body)
    `(annotate 'failfn (fn (,x) ,@body)))
  
  ; To call a function in a way which handles failures, we pass in an
  ; argument and an `on-fail` handler like so. This can be used with
  ; normal functions too, which just never fail.
  (def failcall (f x on-fail)
    (if (isa f 'failfn)
      (let (succeeded val) rep.f.x
        (if succeeded
          val
          (on-fail val)))
      f.x))
  
  ; When a failfn is called normally, it behaves as though it was
  ; failcalled with a handler that always produces an error.
  (defcall failfn (f x)
    (failcall f x
      (fn (failure-val)
        (err:+ "Failed with " (tostring:write failure-val)))))
  
  ; We define an example failfn. We can't use `def` for this since it
  ; defines a normal function.
  (= failure-prone-sqrt
    (failfn x
      (if (< x 0)
        (list nil "Tried to take the square root of a negative number")
        (list t (sqrt x)))))
  
  
  
  arc> (failure-prone-sqrt 4)
  2
  arc> (failure-prone-sqrt -4)
  Failed with "Tried to take the square root of a negative number"
    context...:
     /path/to/anarki/ac.rkt:1327:4
  
  arc> (failcall failure-prone-sqrt 4 idfn)
  2
  arc> (failcall failure-prone-sqrt -4 idfn)
  "Tried to take the square root of a negative number"
  arc> (failcall sqrt 4 idfn)
  2
  arc> (failcall sqrt -4 idfn)
  0+2i
The REPL transcript shows me calling a failfn using a normal call, calling a failfn using a failcall, and calling a normal function using a failcall. The only case that causes an actual error is when the failfn fails and there was no handler to catch it.

Obviously, a more full-featured approach would allow failcalls of arity other than one. And this `failcall` syntax doesn't have the convenient kind of pattern-matching syntax your `onfail` macro does, but that kind of thing could be built as a layer over the top of this example; I'm just keeping the example small.
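
For what it's worth, the arity generalization is small. Here's a hedged sketch (hypothetical names), moving the handler to the front so the call arguments can be a rest parameter:

  (mac failfn* (args . body)
    `(annotate 'failfn (fn ,args ,@body)))

  (def failcall* (f on-fail . args)
    (if (isa f 'failfn)
      (let (succeeded val) (apply rep.f args)
        (if succeeded val (on-fail val)))
      (apply f args)))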

Racket is just as capable of this technique as Anarki is. Instead of an `annotate` tagged value, the Racket version would use a struct, and instead of `defcall`, it would use the `prop:procedure` structure type property.

[1] https://github.com/rocketnia/lathe/blob/7127cec31a9e97d27512...

---

As far as making core dumps goes, I've never tried this, but it looks like `gdbdump` might be able to do it for Racket programs on Linux.[2] There's also a Racket built-in called `dump-memory-stats`,[3] which at least in Racket 7.0 appears to give a summary of how many objects of certain kinds are in memory.

[2] https://docs.racket-lang.org/gdbdump/index.html

[3] (https://docs.racket-lang.org/reference/garbagecollection.htm...)

reply

3 points by aw 27 days ago | link

That's insanely clever to define a callable custom type to handle the case of calling the failfn without a fail handler. I'm impressed.

reply

2 points by rocketnia 58 days ago | link | parent | on: Ask: Why is there no "save-list"?

Even if you merge `writefile` and `save-table` like this, but not `readfile1` and `read-table`, then people still need to know, at development time, what type of data is in the file in order to read it, so they might as well use a type-specific way to write it as well. Unfortunately, merging `readfile1` and `read-table` isn't really possible, since their serialized representations overlap; they can't reconstitute information that was never written to the file to begin with.

From a bigger-picture point of view, this seems like it would become a non-issue once Arc had its own reader. I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes. An Arc-specific reader would naturally construct Arc's mutable tables instead.

Doesn't Racket's reader give us similar problems in that it reads immutable strings and cons cells, too? So these problems could all be approached as a single project.

In the short term, it's not a project that would need a whole new reader. It could just be an adaptation of Racket's existing reader... something like this:

  (define (correcting-arc-read in)
    (let loop ([result (read in)])
      (match result
        
        ; TODO: See if this should construct a Racket mutable cons cell
        ; instead (`mcons`). Right now this just creates an immutable
        ; one, which should be fine since Arc uses an unsafe technique
        ; to mutate those.
        [(cons a b) (cons (loop a) (loop b))]
        
        [ (? hash?)
          (make-hash
            (map
              (match-lambda [(cons k v) (cons (loop k) (loop v))])
              (hash->list result)))]
        [ (? string?)
          ; We construct a new mutable string with the same content as
          ; `result`.
          (substring result 0)]
        ; We handle tagged values, which are represented as mutable
        ; Racket vectors.
        [(? vector?) (list->vector (map loop (vector->list result)))]
        
        ; We handle various atomic values. (TODO: Add more of these
        ; cases until we've accounted for every writable type Arc
        ; supports. Alternatively, just make this a catch-all
        ; `[_ result]`.)
        [(? number?) result]
        [(? symbol?) result])))
Writing Arc's `queue` type might be tricky, since that representation relies on sharing. It's possible queues (and other tagged values in general) should have a customized read and write behavior.
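
To illustrate the sharing problem in plain Racket terms: a queue's tail pointer aliases the last cons cell of its contents, and a `write`/`read` round trip forgets that aliasing unless it's written in graph notation:

  (define contents (list 1 2 3))
  (define tail (cddr contents))  ; same cons cell as the list's tail

  ; By default the two print as independent copies, so `read` would
  ; rebuild them as separate cells:
  (write (list contents tail))  ; ((1 2 3) (3))

  ; Racket can expose the sharing with graph notation:
  (parameterize ([print-graph #t])
    (write (list contents tail)))  ; ((1 2 . #0=(3)) #0#)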

reply

3 points by hjek 58 days ago | link

> I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes.

Racket also has mutable hashes created using `make-hash`[0] rather than `hash`. It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?

[0]: https://docs.racket-lang.org/reference/hashtables.html#%28de...

reply

3 points by rocketnia 58 days ago | link

"It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?"

I'm pretty sure the Racket reader never reads a mutable hash, but that it's possible for a custom reader extension to do it.

Some of Racket's approach to module encapsulation depends on syntax objects being deeply immutable. In particular, a module can export a macro that expands to (set! my-private-module-binding 20) but which uses `syntax-protect` so that the client of that macro can't use the `my-private-module-binding` identifier for any other purpose. If the lists constituting a program's syntax were usually mutable, then it would be hard to stop the client from just mutating that expansion to make something like (list my-private-module-binding 20), giving it access to bindings that were meant to be private.
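
Schematically, the pattern looks like this (a toy sketch, not any particular module):

  ; `bump!` can mutate `secret`, but `syntax-protect` arms the
  ; expansion so client code can't pick the `secret` identifier out
  ; of it for its own purposes.
  (module counter racket
    (provide bump!)
    (define secret 0)
    (define-syntax (bump! stx)
      (syntax-protect
        #'(begin (set! secret (add1 secret)) secret))))

  (require 'counter)
  (bump!)  ; => 1, while `secret` itself stays inaccessible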

I think this is why Racket's `read-syntax` creates immutable data. As for why `read` does it too, I think it's just a case of `read` being a relatively unused feature in Racket. They don't have many use cases for `read` that they wouldn't rather use `read-syntax` for, so they don't usually have reasons for the behavior of `read` to diverge from the behavior of `read-syntax`.

All this being said, they could pretty easily add built-in syntaxes for mutable hashes, but I think it just hasn't come up. Who ever really wants to read a mutable value? In Racket, where immutable values are well-supported by the core libraries, you aren't gonna need it. In the rare case you want it, it's easy enough to do a deep traversal to build the mutable structure you're looking for (like my example `correcting-arc-read` does).

It only comes up as a particular problem in Arc. Arc's language design doesn't account for the existence of immutable values at all, so working around them when they appear can be a bit quirky.

reply

3 points by hjek 58 days ago | link

> I'm pretty sure the Racket reader never reads a mutable hash, but that it's possible for a custom reader extension to do it.

No..?

    > (require racket/serialize)
    > (define basket (make-hash))
    > (hash-set! basket 'fruit 'apple)
    > (write-to-file (serialize basket) "basket.txt")
    > (define bucket (deserialize (file->value "basket.txt")))
    > (hash-set! bucket 'fruit 'banana)
    > bucket
    '#hash((fruit . banana))

reply

4 points by rocketnia 58 days ago | link

The Racket reader reads a seven-element list there, not a mutable hash.

Looks like you're proposing to use a two-stage serialization format. One stage is `read` and `write`, and the other is `serialize` and `deserialize`. At the level of language design, what's the point of designing it this way? (Why does Racket design it this way, anyway?)

I can see not wanting to have cyclic syntax or syntax-with-sharing in the core language just because it's a pain in the neck to parse and another pain in the neck to interpret. Maybe that's reason enough to have a separate `racket/serialize` library.

But isn't the main issue here that Arc's `read` creates immutable tables when the rest of the language only deals with mutable ones? The mutable tables go through `write` just fine, but `read` doesn't read the same kind of value that was written out. If and when this situation is improved, I don't see where `racket/serialize` would come into play.

reply

2 points by hjek 57 days ago | link

> Looks like you're proposing to use a two-stage serialization format.

I wasn't really proposing anything, only pointing out that it's not the case that Racket never can read mutable hashes, and then illustrating that with a code example.

reply

3 points by rocketnia 56 days ago | link

Hmm, all right. I didn't want to believe that was the point you were trying to make. In that case, I think I must not have conveyed anything very clearly in the `correcting-arc-read` comment.

I've been trying to respond to this, which was your response to that one:

"Racket also has mutable hashes created using `make-hash` rather than `hash`. It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?"

In the `correcting-arc-read` comment, I used `make-hash` in the implementation of `correcting-arc-read`, so I assumed your first sentence was for the edification of others. I found something to respond to in the second, which kinda pattern-matched to a question I had on my mind already ("Can't the Racket reader just construct a mutable table since what was written was a mutable table?").

As for my response to your "No..?"...

One of the purposes of `correcting-arc-read` is that (when it's used as a drop-in replacement for Arc's `read`) it makes `readfile1` return mutable tables. So if anyone had to be convinced that a reader that returned mutable hashes could be implemented at all in Racket, I thought I had shown that already. When it looked like you might be trying to convince me of something I had already shown, I dismissed that idea and thought you were instead trying to clarify what your proposed alternative to `correcting-arc-read` was.

Seems like I've been making bad assumptions and that as a result I've been mostly talking to myself. Sorry about that. :)

Maybe I oughta clarify some more of the content of that `correcting-arc-read` comment, but I'm not sure what parts. And do you figure there are any points you were making that I could still respond to? I'd better not try to assume what those are again. :-p

reply

3 points by kinnard 57 days ago | link

Sounds like I opened a can of . . . lists . . .

reply

3 points by akkartik 58 days ago | link

I have an old fork (https://github.com/akkartik/arc) that has an extensible generic pair of functions called serialize and unserialize which emit not just the value but also tagged with its type. read and write are built atop them.

reply


After the first few paragraphs, this picked up and was an interesting read. That's when it became clear this article wasn't going to try to claim Lisp is supernatural, but to explain why people think of Lisp as supernatural. :)

-----

1 point by kinnard 70 days ago | link

Well, I don't know about that . . . lisp seems pretty supernatural to me.

-----

2 points by rocketnia 70 days ago | link | parent | on: Tell Arc: Arc 3.2

It's good to see this. Thank you! :)

-----

3 points by rocketnia 70 days ago | link | parent | on: Inline JavaScript

"Yeah the whole thing should get HTML5-alived. CSS, JS and web-standards have evolved significantly since the app was originally written."

I kept wanting to bring up some of pg's writing on this topic, but I couldn't find it until now:

http://paulgraham.com/arc0.html

pg: "Arc embodies a similarly unPC attitude to HTML. The predefined libraries just do everything with tables. Why? Because Arc is tuned for exploratory programming, and the W3C-approved way of doing things represents the opposite spirit.

[...]

Tables are the lists of html. The W3C doesn't like you to use tables to do more than display tabular data because then it's unclear what a table cell means. But this sort of ambiguity is not always an error. It might be an accurate reflection of the programmer's state of mind. [...]

Good cleanness is a response to constraints imposed by the problem. Bad cleanness is a response to constraints imposed from outside-- by regulations, or the expectations of powerful organizations."

Personally, I think using semantic HTML isn't that big a deal to implement, and it seems to have practical benefits in terms of accessibility. It isn't just something the W3C is trying to impose on people arbitrarily.

And yet, the built-in features of HTML and CSS have a tendency of being very arbitrary. If you want a text box, you can have it; if you want a set of radio buttons, you can have it; if you want a set of radio buttons where one of them says "Other (please specify)" and has an associated text box, you suddenly have a significant amount of code to write. If you want to style the first letter of a paragraph, you can use a ::first-letter CSS selector... but if you want to style letters other than the first, you need to wrap them in explicit HTML elements, which unfortunately has other effects you might not want (like causing screen readers to treat each letter like it's a separate word).

Sometimes, there isn't a workaround. For instance, pages have titles, which appear in the top of the browser window. Have you ever seen a page with a title and a subtitle displayed just under it? I haven't, and short of finding a security hole that makes the browser execute arbitrary code, it seems pretty clear there isn't a way to do this.

Sometimes, there is a conceivable workaround, but it requires something like building your own text layout engine from scratch and then wrangling with a lot of obscure Unicode scripts, screen-reader troubles, text selection support, etc. There are some in-browser text editors which have to make this kind of effort just to achieve syntax highlighting.

And sometimes, there's a workaround that's a lot like building your own substantial subsystem of the browser, but it's actually fairly reasonable to do in a pinch. Like, there are a bunch of front-end frameworks for writing reactive UIs. They take in something that's pretty similar to DOM nodes (sometimes even obtained by parsing DOM nodes that aren't meant to be displayed to the user), they generate actual DOM nodes that are similar to those, and they modify those generated DOM nodes on the fly as the application state changes. In certain respects, these frameworks can save a lot of work by taking advantage of the underlying features of HTML... and in certain respects, there's extra work involved in inverting HTML's abstractions to get them to support this new indirect interaction style they weren't designed for.

Which brings me to another pg quote...

http://www.paulgraham.com/ilc03.html

pg: "But the advantage of a rewritable language is more than that it lets programmers fix your mistakes. I think the best programmers tend to work by rewriting whatever language they're using. So even the perfect language, if there is such a thing, would be very rewritable. In fact, if I had to guess, I think the perfect language might be whichever one was most rewritable."

How much code does it take for someone to implement their own "rewritten" variation of some HTML or CSS feature? Well, if they can't achieve their goal without writing their own text layout engine or their own virtual DOM framework, quite a lot.

This is important to Arc because it makes the program longer.

Arc is a language designed incrementally by starting with something Lispy and then making whatever changes will shorten Arc programs. News is a program that was written to put Arc to the test; the shorter its code is, the better Arc is doing.

If News used HTML or CSS features in very picky ways, it wouldn't be a good test: As soon as News's needs strayed slightly from the HTML and CSS features that browsers had built-in, then a heap of code would need to be written to make up the difference. When a slight discrepancy in the way a measuring instrument is consulted results in a large discrepancy in the measurement, it's not a very reliable measuring instrument.

So it seems to me that of all possible programs, News was pursued because it could get by on using HTML features in non-picky ways, pretty much in the ways they were already designed to be used. The use of HTML tables and transparent gif spacers was a well-known and once-popular, if dated, technique for achieving layout that was consistent across browsers, so pg built abstractions on that technique for News.

All that being said, I personally think it's a great improvement for News to use semantic HTML tags instead of tables, and I don't think this change really does that much to the size of the codebase (does it?). I just figure these pg writings are interesting in this context.

---

This has me thinking about html.arc....

The way html.arc is designed involves a lot of special-casing of specific HTML tags and attributes. It's almost like a full go-between layer abstracting HTML from Arc, which suggests that with some ambitious modifications to html.arc, it could turn into a DSL that compiles to HTML in a more indirect way (perhaps performing nonlocal transformations to implement things like footnotes or column breaks). This would potentially be a good place to hone the design of the HTML built-ins so that they're more abstraction-friendly and more "rewritable" as far as Arc code is concerned.

This has a lot in common with those front-end frameworks I mentioned. They abstract over and extend the features of HTML, and in doing so, they tend to make HTML's built-ins "rewritable" by exposing a new way for programmers to define their own extensions of the same kind.

-----

3 points by krapp 70 days ago | link

>Tables are the lists of html

    (ノಠ益ಠ)ノ彡┻━┻ NO PG LISTS ARE ALREADY THE LISTS OF HTML
... sorry. Don't know what came over me there.

>and I don't think this change really does that much to the size of the codebase (does it?).

Most of it is the result of moving existing code around, so I think it comes out about even. I don't know how much of a performance issue macro expansion is but there is less of it in the new code, and the HTML itself should be simpler without tables.

>This has me thinking about html.arc....

Racket has its own xml/html library and there is an sml.arc which I haven't played with yet that seems like it might be capable. html.arc seems to do both too much and too little... the attributes blacklist makes it difficult to have modern features like data attributes, and the more macros there are, the more polluted the global namespace becomes.

I've sometimes thought it would be nice if Arc supported css and xml grammar natively, but I have no idea what it would take to actually support that. And I'm probably the only person here who wants to just write html and css directly, rather than using s-expressions or concatenating strings.

-----

2 points by i4cu 70 days ago | link

> Tables are the lists of html. The W3C doesn't like you to use tables to do more than display tabular data because then it's unclear what a table cell means. But this sort of ambiguity is not always an error. It might be an accurate reflection of the programmer's state of mind. [...]

Note that it's traditional html tables pg is referencing. I could be wrong, but my understanding is that the 'display' property options 'table', 'table-row', 'table-cell', etc. were not well supported (in IE particularly), or even in existence, at the time pg wrote the code. So he may not believe the same now.

> The predefined libraries just do everything with tables. Why? Because Arc is tuned for exploratory programming, and the W3C-approved way of doing things represents the opposite spirit.

I don't agree. My understanding is that traditional tables are rigid, which is why they suggest you only put data into them. They are highlighting that tables are, generally, not suitable for other things. Divs allow for more flexible manipulation. For example, I can create a table and then decide to break out of the table somewhere in the middle of its content to render some component. Or I can make a table and morph it into something different by only changing its properties via css/js.

But pg was not writing web apps. He was writing web pages and then calling a new page for, pretty much, ANY change. So from that perspective (where you can macro away server side) you can see why pg would just put it all in a table and highlight how using arc macros will let you do more compose-able things.

-----

3 points by i4cu 69 days ago | link

FYI, obviously HN is a Web app, but I'm referring to the modern view of Web Apps where the workload is being done client side via javascript.

PS looks like my IP was banned. I think I made too many edits to a comment or whatever. So I probably will not be here for a while...

-----


This came up during ar development, although I don't remember who brought it up. I think aw was about to implement it but was concerned that it would be annoying at the REPL to have all the program's bindings be looked up eagerly (aka for them to not be "hackable"). It could be annoying for mutually recursive macros, too, although those are already tricky enough that it probably wasn't a big concern at the time.

I remember recommending an upgrade: Instead of generating:

  `(,my-func ,a ,b)
Generate something that preserves the late binding of mutable global variables, like this:

  `((',(fn () my-func)) ,a ,b)
I seem to remember aw wasn't convinced this cruft was going to be worth whatever hygiene it gained, even after I recommended a syntactic upgrade:

  `(,late.my-func ,a ,b)
Since aw values concision a lot, perhaps the issue was that most programmers would surely just write ,my-func in the hope that it would be rare (or even incorrect) to ever want to rebind it, and then other programmers would suffer from that decision.
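
To spell out the difference the wrapper makes, here's a toy example (it assumes an Arc variant where macroexpansions can contain first-class values, which is the premise here; all names are made up):

  (= target (fn () 'old))

  ; `early` captures the value of `target` at expansion time;
  ; `late` looks it up again on every call via the quoted closure.
  (mac early () `(,target))
  (mac late () `((',(fn () target))))

  (def demo () (list (early) (late)))

  (= target (fn () 'new))
  ; (demo) would now return (old new): `early` still calls the old
  ; function value, while `late` sees the redefinition.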

But Pauan followed a design path like this in Nulan and/or Arc/Nu. In Pauan's languages... actually, I think I remember a couple of approaches, and I don't remember which ones were real.

One approach I think I remember is that `(my-func a b (another-func c) d) inserted mutable boxes for my-func and another-func into the result, but it would just implicitly unquote a, b, c, and d, because variables at the beginning of a list are likely to be mutable globals and variables in other positions are likely to refer to gensyms.

There might have even been an auto-gensym system in that quasiquote operator at some point.

---

I liked this approach a lot at the time. That was the same time I was working on Penknife. I was trying to avoid name collision with first-class namespaces in Penknife, but I still wanted macros to work, and I was trying to take a very similar approach. (I couldn't take exactly the same approach because my macroexpansion results were strings; instead, I think I allowed regions of the macroexpansion result to be annotated with a first-class namespace to use for name lookup.)

When Penknife's compile times were abysmally long, I started to realize that even if I found an optimization, this was going to be a problem with macros in general. Anyone can write an inefficient macro, and anyone who can put up with it can build a lot of stuff on top of it that other users won't be able to appreciate. So I started to require separate compilation in my language designs.

With separate compilation in mind as a factor for the language design, it no longer made sense to put unserializable values into the compiled code. Instead, in Penknife's case, I devised a system of namespace paths to replace them. The namespaces were still first-class values, but one thing you could do with a Penknife macro was get hold of its first-class definition-site namespace, so the macroexpanded code could refer to variables in distant namespaces by specifying a chain of names of macros to look them up from. And this kept things hackable, too, since you could mutate a macro's definition-time namespace explicitly (not that many programs or REPL sessions would bother to do that).

Not long after, I set a particular goal to make a language (Era) where the built-in functionality was indistinguishable from libraries. Builtins aren't hackable from within the language, so the module system needs to make it possible (and ideally easy) for people to write non-hackable libraries.

(Technically they only need to be non-hackable to people who don't know the source code, because once you know the source code, you know you're not dealing with builtins. I intend to take advantage of this to make the language hackable after all, but it's going to take essentially a theorem prover in the module system before it's useful to tell the module system you have the source code of a module, as opposed to just using that source code to compile a new module of your own behind the module system's back.)

Anyhow, this means I haven't put hackability at the forefront for a long time.

I think the embedding-first-class-values approach will work, and I think late binding is workable (by using late.my-func) and there's a workable variation of that late binding approach to enable separate compilation too (by using namespace paths made out of chains of macro names). So I like it, but I just have this stuff to recommend for it to make it really tick. :)

---

By the way, it's good to see you! I wondered how you were doing.

-----

3 points by waterhouse 101 days ago | link

With interpreter semantics, in which a macro gets expanded anew every time a function is called, the late binding comes for free. ;-) Then, if you want the runtime performance that comes from compilation, you optimize for the case where people are not redefining functions, and invalidate old compilation results when they do. I think that rough plan should be doable, though I haven't gotten around to implementing enough of a system to say how well it works. But I think that's the only way to get anything close to good performance in Javascript VMs (not that they expand macros, but I expect they inline function calls and such, which requires similar assumptions about global definitions), and it seems to have been done.

For separate compilation, it does seem clear that what gets serialized will be references like "the object [probably a function] named 'foo in module bar", and structures (s-expressions or otherwise) containing such references. Given that compilation implies macroexpansion, you do have to assume (or verify) that the macros from other modules are what they used to be—and that non-macros (used in functional position at least) are still non-macros. If you have a full-blown Makefile kind of build system, then by default I suppose every file's output depends on the contents of every other file that it uses; or, as an optimization, depends merely on the exact set and definitions of macros exposed from those files. (In the C++ system I encounter at work, code is separated into .cpp and .h files, and editing a .h file causes the recompilation of every .cpp file that recursively depends on it, but editing a .cpp file only causes its own recompilation. If you wanted to imitate that, I guess you'd put macros into a distinctively named set of files, and forbid exportable macros anywhere else.)

---

Thanks! I've sold out and have been working for a medium-sized company doing mostly C++ and bash (the latter is unbelievably useful) for the past 3.5 years. I make intermittent progress on the side doing other things.

-----
