Arc Forum
Issues with extensible def
4 points by akkartik 2319 days ago | 1 comment
I've been playing for some time now with writing all my code in a style of stepwise refinement. For example, here's how I extend len to understand arc's queues:

  def len(x) :case (isa x queue)
This can be a really elegant style for separating concerns. All my code about queues is in one place, even if it updates primitives. But there are costs:
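For readers who don't know wart, here's roughly what the :case mechanism is doing, sketched in Python. All the names here (extensible, case_of, Queue) are my own illustrative choices, not wart's; it's a sketch of predicate dispatch, not wart's implementation:

```python
def extensible(f):
    """Wrap f so later :case-style clauses can be layered on top of it."""
    cases = []
    def dispatch(x):
        for pred, body in cases:
            if pred(x):
                return body(x)
        return f(x)              # fall through to the original definition
    dispatch.cases = cases
    return dispatch

def case_of(f, pred):
    """Register a new case on an extensible function f."""
    def register(body):
        f.cases.insert(0, (pred, body))   # newest case wins
        return f
    return register

@extensible
def length(x):
    return len(x)

class Queue:
    def __init__(self):
        self.items = []

# "def len(x) :case (isa x queue)" corresponds to:
@case_of(length, lambda x: isinstance(x, Queue))
def _(q):
    return len(q.items)
```

With this, `length("abc")` still returns 3 via the original definition, while `length(some_queue)` hits the new case first. Note the cost that's about to come up: every call now walks the case list.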

a) Performance. What used to be a simple macro call can now require multiple expansions. Especially if you build the language from the ground up in this style, the increase in calls has a compounding effect. It's all very well to say 'design the language without regard to performance', but it's five orders of magnitude slower than anything else out there.

I've been working on a partial evaluator to 'inline' a lot of this redundant work, but it's been slow going. I'm not yet at the point where (if a b) uses the compiled if rather than the macro that supports more than 2 branches.

b) Infinite loops. Especially when dealing with core language features like assignment and conditionals and equality, it's easy to overload too much. See, for example, how I implement isa.

So far I've been rationalizing these problems as transients: something I have to write a test case for, debug once, and never deal with again. However, it's getting to the point where it's really hard to spot the infinite loop, to insert a print in just the right place to see the repetition. Most recently, I found myself trying to overload if to support pattern matching, and realized that if the :case clause ever calls if, even indirectly, I end up with an infinite loop. And the language doesn't even do all that much yet!
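The shape of that loop is easy to reproduce. Here's a toy Python model (all names are illustrative, not wart's): an extensible if dispatches over registered cases, and a case whose body re-enters the dispatcher with the same kind of argument hits itself again, forever.

```python
import sys

def base_if(cond, then, else_):
    return then() if cond else else_()

cases = []

def dispatch_if(cond, then, else_):
    # walk the extension cases before falling back to the primitive
    for pred, body in cases:
        if pred(cond):
            return body(cond, then, else_)
    return base_if(cond, then, else_)

# A 'pattern-matching' case that, somewhere inside its body, calls the
# extensible if again on the same argument -- the bug in question:
cases.append((lambda c: isinstance(c, list),
              lambda c, t, e: dispatch_if(c, t, e)))

sys.setrecursionlimit(100)   # keep the inevitable blowup short
try:
    dispatch_if([1], lambda: 'yes', lambda: 'no')
except RecursionError:
    print('infinite loop caught')
```

Calls that don't match the new case (e.g. a plain boolean condition) still work fine, which is exactly why these loops are hard to spot until the one triggering input shows up.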

One idea to deal with this: I've been mulling changing the semantics so that new :case clauses on existing functions aren't used in code already loaded. It would improve performance and also hugely mitigate the headache of debugging infinite loops. But it feels like a compromise of precisely the sort that I've been railing against.[1]

Anyways, just wanted to toss this out there to get your reactions.


[1] Rather surprisingly, I can't find a decent link for my rant. Was it all in my head? Let me try to state it.

A lot of the benefit of lisp has traditionally come from the repl. And a lot of the benefit of the repl comes from the fact that you can type incorrect code in, and you won't be bothered by the toolchain until you try to run it. Functions can be defined in any order, called in other function bodies before they're defined, and lisp won't complain. Dynamic types and late binding are special cases of this ethos.

The value of this approach isn't just that you can be disorganized when sketching out a new program. Eliminating machine-driven constraints affords freedom to organize code. Organizing code for readability is hard. So hard that most complex programs in any language are poorly organized. They start out with structure following behavior, and then that mapping gradually rots. Axes of organization that made sense before cease to do so, or are swamped by the needs of a new aspect. Definitions in a file become more intimately connected to distant definitions than to nearby ones, and the reorganization never quite catches up. Existing programmers can handle this gradual semantic drift, but it raises the barrier to new programmers contributing to the project. Given how reluctant most of us are to even try, every constraint added by our tools makes it less likely that we'll reorganize every time the emphasis of the codebase switches directions. Many's the time I've tried to reorganize the header files in a C codebase and given up after a few hours because it was just too hard to manually solve the graph constraint-satisfaction problem that would make the compiler happy.

So the fact that lisp started out with very few constraints is, to my mind, super key. But most of the world doesn't seem to think so. Both lisp and scheme have over time added features that constrain code organization. Languages ahead of their time, they have stolen package and module systems from the static languages around them (partly to simplify the lives of compiler writers). This has happened so gradually that I think there isn't a realization of just what has been lost.

Wart employs an init-style convention for deciding what files to load, and in what order. That idea stems from the same considerations. Renaming a file should require: step 1, renaming the file. There ought to be no step 2. That's how easy I want reorganization to be. And I want it to be as true as possible at all levels of granularity.

Richard Gabriel: "If I look at any small part of it, I can see what is going on -- I don't need to refer to other parts to understand what something is doing. This tells me that the abstractions make sense by themselves -- they are whole. If I look at any large part in overview, I can see what is going on -- I don't need to know all the details to get it. It is like a fractal, in which every level of detail is as locally coherent and as well thought-out as any other level." ("The quality without a name")


Oh, there's another variant of this rant, if you're really bored. Does it even seem like the same rant? Maybe it really is all in my mind.

2 points by akkartik 2318 days ago | link

Ah, I solved the infinite loop!! I'm glad I took the time to write this out here.

Let's start with these definitions:

  (def foo() 34)
  (def bar() (foo))
I'd like to make sure that bar always calls the same foo, even if foo is later overridden.

  wart> (bar)
  34
  wart> (def foo() 35)
  wart> (bar)
  35 ; Somehow should still return 34
To do so, I save the current binding of foo when bar is defined, and restore it inside the body.

  (let $foo foo             ; lexical load-time binding
    (def bar()
      (making foo $foo      ; restore foo for the entire dynamic scope of this body
        (foo))))

  wart> (def foo() 36)
  wart> (bar)
  34
(making is wart's way of creating a new dynamic binding.)


I'm permitting foo to be extended but freezing its definition at a specific call-site. Compare this with racket's approach, which is to freeze the definition of foo and prevent any extensions at all call-sites. Racket's approach is similar to Java's final keyword.
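The pattern itself is easy to model in any language with first-class functions. Here's a Python analogue of the $foo trick (my analogy, not wart's mechanism): a default argument is evaluated once, when bar is defined, so it freezes the binding of foo at that moment.

```python
def foo():
    return 34

def bar(_foo=foo):        # capture the current binding of foo at definition time
    return _foo()

def foo():                # later 'extension' rebinds the name foo
    return 35

print(bar())              # still 34: bar calls the foo it was defined against
```

The difference from wart's version is that this captures one fixed function rather than restoring a dynamic binding for the whole body, but the call-site-freezing effect is the same.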


I wish I could save this pattern as a macro:

  (freezing foo
    (def bar()
      (foo)))
But freezing would need to 'reach into' its subexpressions. Hmm, I think I can bake this into the implementation of extensible mac. Too bad I won't be able to use stepwise refinement in implementing _that_...

Update: I've fixed mac so :case expressions can't trigger infinite loops. However, this commit makes my tests 8 times slower. It's a shattering illustration of how these calls can compound.