Arc Forum | rntz's comments
1 point by rntz 6244 days ago | link | parent | on: Simple multimethods: lib/multi.arc

I'm not sure exactly what you're getting at. Admittedly, what I did there is somewhat hackish (although my original method was worse). The default-method argument is just that: a default method which will be called (with the arguments to the multimethod) if no other method is found. But the expression that makes it up is only evaluated if necessary (lazily, in an 'or expression). So if it produces an error, then when that expression is evaluated, that is, if and only if we have failed to find a matching method, it produces an error; and since, in the macro itself, I know the name of the multimethod, I can substitute this into the default for 'default-method, and have nicer error messages.
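Roughly the shape of the trick, with hypothetical names ('find-method here is not the real multi.arc internals):

    (mac multi (name)
      `(= ,name
          (fn args
            (apply (or (find-method ',name args)
                       ; only evaluated if no method matched; and since
                       ; the macro knows 'name, the message can use it
                       (fn _ (err ,(string "no method of " name " matches"))))
                   args))))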

-----

1 point by almkglor 6244 days ago | link

Ah sorry, misread the code, forgot the 'apply bit around the form.

-----

4 points by rntz 6245 days ago | link | parent | on: Simple multimethods: lib/multi.arc

It's true that if the dispatcher expects exactly one argument, you can't add optional arguments. The solution is very simple: don't make dispatchers which expect exactly n arguments. Make all your dispatchers take rest parameters which they ignore. In fact, 'multi-keyed already does exactly that, so if you used (multi-keyed area 'shape), your example would work exactly as expected. It's not a hard fix.
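Concretely, something like this (a sketch, assuming 'multi takes a dispatch function as in lib/multi.arc):

    ; brittle: breaks as soon as a method wants optional arguments
    (multi area (fn (shape) (type shape)))

    ; robust: same dispatch, but extra arguments are accepted and ignored
    (multi area (fn (shape . rest) (type shape)))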

Regarding reasons for not implementing Clojure-style multimethods in Arc-F, how about this one: because you can implement them in plain old arc! Don't put stuff in Arc-F if it can be done in Arc or Anarki already. Interoperability is good - we don't need to fork the community as well as the language.

-----

1 point by almkglor 6245 days ago | link

> It's true that if the dispatcher expects...

It's not just the number of arguments. One advantage of CLOS-style multimethods is this:

  (def bam (a b)
    (err "collision type unknown!"))
  (defm bam ((t a ship) b)
    (destroy ship)
    (destroy b))
  (defm bam ((t a ship) (t b missile))
    (destroy ship)
    (destroy missile)
    (add-points missile!source))
Because of the computation of keys, you can't exactly implement the above using clojure-style multimethods without tricks like method bouncing.

Like I said: already implemented clojure-style multimethods. And tried it. So yes: I'll continue bashing my head implementing CLOS-style multimethods, because while clojure-style multimethods are cute, they're not good enough for all cases. Arguably neither are CLOS-style multimethods, but at least we have an alternative choice.

Edit:

Also, there's a good reason for implementing this in the scheme-side: efficiency. Your implementation of clojure-multimethods allocates a cons cell for each argument to the multimethod. The scheme-side implementation does not, because on the scheme-side I have access to the ar-funcall* functions.
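What I mean by the allocation, sketched on the Arc side (names hypothetical):

    (def make-multi (dispatch)
      ; args is a rest parameter, so every call conses up a fresh
      ; argument list before we even get to dispatch
      (fn args
        (apply (find-method dispatch args) args)))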

Efficiency of course is not a concern, except when it is.

As an aside, there are several bits of Arc-F that look like they're implemented in Arc, but are actually implemented in the scheme-side. There's an Arc implementation of them in arc.arc, which is labeled as "IDEAL", while the actual binding to the scheme side is labeled as "REAL". For example, the basic compose function is actually implemented in the scheme-side, but there's a reference Arc implementation in arc.arc marked "IDEAL". The only reason they're on the scheme-side is due to efficiency. Ideally, they would be in Arc: realistically, they are better implemented in the base system.

-----

1 point by rntz 6244 days ago | link

This is true, and indeed I mentioned it in the OP; you can't do type-based dispatch with clojure-style multimethods except on exact matches. You also can't get method chaining. But clojure-style has the advantage of being damn simple, and easily permits dispatch based on non-type-based conditions. CLOS style has the advantage of being more flexible about dispatch and integrating well with OO methodologies.

I'm not trying to convince you that CLOS multimethods are bad or not to implement them; a full implementation for Arc or Arc-F would be _very cool_. CLOS is without a doubt my favorite thing about Common Lisp. But Clojure-style multimethods are not "cute" or useless. They're just not a universal panacea. Very little is.

-----

1 point by almkglor 6244 days ago | link

> "cute"

For me, cute means something really really nice, not necessarily useless. Like cute mature women, for example. Or better: cute girls, with guns. LOL.

Method chaining may require us to rethink PG's type system, at least if we want to handle a drop-down to a more generic type (which arguably is the more interesting bit about chaining). It's reasonably easy to drop from the "argument 2 matched type T" to "argument 2 matched no types", but how about when we want to drop from "argument 2 matched derived type D" to "argument 2 matched base type B"?

Waa.

We would have to have an operator which determines if D is derived from some random type B, and forcibly standardizing on it. This is going to make my head explode.
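A sketch of such an operator, assuming a parent* table mapping each type to its base:

    (= parent* (table))

    (def subtype (d b)
      (or (is d b)
          (aif parent*.d (subtype it b))))

Method search would then walk upward: try the argument's own type, then each base in turn, then the untyped default.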

-----

5 points by rntz 6252 days ago | link | parent | on: NewLisp - will Arc ever get to surpass this?

Arc hasn't DONE anything big yet.

Not that this is necessarily a bad thing - it's good to take your time thinking about hard problems. I just hope the Arc community doesn't get so pissed off with pg's slow pacing that it goes off and creates yet another could-have-been-great-but-for-x-y-and-z new lisp dialect.

-----


Code very similar to this was posted on this forum before, I think by almkglor, but I don't think it made its way to anarki. I'm using it and I think it's quite useful, so I put it there. Use is pretty self-explanatory:

    (set foo 'global)
    (let foo 'local (eval 'foo))
    => global
    (let* foo 'local (eval 'foo))
    => local

    (set foo nil)
    (slet foo 1 (slet foo 2 (eval 'foo)))
    => (2 1)

    (dynvar input&)
    (def read1 () (read input&))
    (let* (input&) "foo" (read1))
    => foo

-----

2 points by rntz 6272 days ago | link | parent | on: Arc's web server in Lua

On the basic datatypes of arc: you cannot remove symbols or lists from arc. Then it would no longer be a lisp. I suppose you could just have strings be symbols or vice-versa, but then you either put overhead on usage of strings for variable lookup or overhead on usage of symbols for string manipulation.

-----

2 points by sacado 6272 days ago | link

Yes, but, for example, association lists could be implemented as hash tables while being seen and manipulated as regular lists. That's tricky, but it should be feasible. And Clojure removed cons cells and thus "traditional" lists, but it's still a Lisp...

As for strings and symbols, I don't really know how they are actually implemented, but as far as I know, the idea is that 'foo and 'foo are the same memory location, while "foo" and "foo" are not necessarily the same object (thus allowing string mutation). Or maybe I'm wrong?

Lua's strings are implemented the same way: "foo" is always the same location as any other "foo", and string manipulation really doesn't seem to be a problem at all...

I really think "foo" could be just an alternate syntax for 'foo (just as |foo| is) so that we still have a Lisp... Add string manipulation facilities to symbols, and you're done. In any case, characters just seem useless...

-----

3 points by almkglor 6272 days ago | link

> Yes, but, for example, association lists could be implemented as hash tables while being seen and manipulated as regular lists.

    (= foo '((key1 . val1) (key2 . val2)))
    (= bar (cons '(key2 . different-val2) foo))
    bar
    => ((key2 . different-val2) (key1 . val1) (key2 . val2))
Hmm.

Probably means we want to have some sort of "source" slot too, so that we can display shadowed values. Hmm. The association sublists e.g. '(key1 . val1) can probably be directly shared, but lists also imply an ordered set, so we need to store that info too. Hmm.

> As for strings and symbols, I don't really know how they are actually implemented, but as far as I know, the idea is that 'foo and 'foo are the same memory location, while "foo" and "foo" are not necessarily the same object (thus allowing string mutation)

This is correct. And when you really look at it, changing strings as if they were arrays of characters is hardly ever done in Arc; usually what's done is we just read them off as an array of characters and build a new string.
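For instance, the usual "mutation" idiom is really read-and-rebuild:

    ; upcase a string by rebuilding it, not by mutating it in place
    (def upstr (s)
      (string (map upcase (coerce s 'cons))))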

> In any case, characters just seem useless...

Another lisplike, Skill, uses 1-char length symbols for characters (i.e. no separate character type). Also, many of its string manip functions also accept symbols (although they still all return strings).

-----

1 point by sacado 6271 days ago | link

  >  => ((key2 . different-val2) (key1 . val1) (key2 . val2))
When you don't know why your design's wrong, ask almkglor :)

Ok, well it's probably a little trickier than I thought... Maybe a dual implementation, as you suggest, would work then...

-----

3 points by stefano 6271 days ago | link

Another point to consider is that if your a-list is very small (<= 5 elements) it could be faster than hash tables. The sharing behavior could be achieved with some sort of concatenated hash-tables, a list of tables to consider in turn to find the desired element. This seems very slow though. BTW, removing a-lists would be useless: they're so simple to implement that a lot of developers (me included) would re-invent them to use in their applications.
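A sketch of the concatenated lookup, with each table shadowing the ones after it:

    (def chained-ref (tabs key)
      (if tabs
          (aif ((car tabs) key)
               it
               (chained-ref (cdr tabs) key))))

Note the sketch conflates a key bound to nil with an absent key, which a real a-list distinguishes.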

-----

1 point by almkglor 6271 days ago | link

LOL

-----

3 points by gnaritas 6271 days ago | link

My two cents on strings/symbols: there are semantic reasons they should always be separate. It's not about string mutation; ignore the implementation, it isn't relevant.

Two symbols 'foo can always be considered to mean the same thing; two strings "foo" can't, since they may be the same only by coincidence. This matters if automated refactoring tools become available, because you can safely rename all occurrences of a symbol; the same cannot be said of strings.

Mixing strings and symbols is a bad idea, they are different things and should remain that way.

-----

1 point by sacado 6271 days ago | link

Well, we could at least have a very lightweight bridge between both worlds, by allowing (and automatically converting) symbols where strings are needed and vice versa.

Code crashing because, for instance, you tried to concatenate a string and a symbol is rather annoying, but these bugs keep happening and the developer's intention is rather obvious there.
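Arc's 'string already coerces its arguments, so the bridge is nearly a one-liner (a sketch; 'str+ is a name I'm making up):

    ; a coercing concatenation, so (str+ "shape: " 'circle) just works
    (def str+ args
      (apply + (map [coerce _ 'string] args)))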

-----

4 points by gnaritas 6271 days ago | link

Oh I don't disagree, I'm actually a Smalltalker and I'm accustomed to Symbol being a subclass of String and a String being a list of Characters. I actually have refactoring tools so I just wanted to point out that they are different for more than just implementation reasons and there's a logical reason to have symbols beyond mere optimization.

I just hang out here because I like to see how a new Lisp is born and how you guys think, even though I don't actually use Arc.

-----

3 points by rntz 6275 days ago | link | parent | on: Ask The Community: Lexical Scoping

The main reason is that it's simpler.

The second reason is that the only (AFAIK) example of standardized hygienic macros - scheme - is a hell of a lot harder to use than it really needs to be.

The third reason is that unhygienic macros are more powerful, because they LET you subvert lexical scope if you want to. Sometimes macros want to mess around with variables you didn't explicitly pass in as arguments but which are not available in the scope of the macro's definition. Rare, but it happens.

For example, what if a macro wants to define something that is not passed in as a symbol, and have it available in the scope in which the macro was invoked? All of the anaphoric macros (eg. "aif") are of this type, and they cannot be done in a pure hygienic system.

IMO, the best idea would be to have a system which is hygienic by default, but in which you can explicitly say "hey, I want to use this variable dynamically". In fact, I also think this is the way that scoping should work in general: sometimes you'd like to be able to bind a variable dynamically - for example, you want to debug a function call within a certain period of code execution, so you dynamically bind the symbol to a wrapper function. This can be emulated by setting it beforehand and unsetting it afterward, but this doesn't handle call/cc, threading, etc. correctly.
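The set-beforehand/unset-afterward emulation is easy to write down, which also shows where it fails: 'after restores the binding on normal exit and on errors, but a continuation re-entering the body, or another thread, sees the wrong value.

    (mac dlet (var val . body)
      (w/uniq old
        `(let ,old ,var
           (= ,var ,val)
           (after (do ,@body)
                  (= ,var ,old)))))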

However, lexical scope only for variables is usually "good enough", and it's simpler to implement than a hybrid system; likewise dynamic scope only for macros is usually "good enough", and it's simpler to implement than either a hybrid or a lexical system. (In fact, it used to be thought that dynamic scope for variables was "good enough", and it is easier to implement [for lisp at least: just keep a running alist-stack of variable bindings] than either lexical or hybrid scope.)
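That alist-stack really is the whole implementation of dynamic scope, e.g.:

    ; entries are (sym val); binding conses onto the front,
    ; so lookup finds the innermost binding first
    (def bind (sym val env)
      (cons (list sym val) env))

    (def lookup (sym env)
      (aif (assoc sym env)
           (cadr it)
           (err "unbound variable:" sym)))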

-----

1 point by nlavine 6274 days ago | link

Okay, I think I get what's going on with macros. So the idea of current macros is that they can only insert references to variables in the scope of wherever they're expanding. Therefore they do operate purely on syntax. If they want to insert references to functions, you need to make sure that the functions are bound in the expansion scope, but on the other hand, the macros are conceptually clean and really really easy to implement.

For what you're saying, I suggest the old MIT Scheme macro system as an example (before they implemented Scheme macros correctly): essentially, a macro is a function of three arguments - the form it's applied to, the environment of that form, and the environment that the macro was defined in. There is a procedure to return a "symbol" that refers specifically to a name bound in a given environment, so you can make specific references to it. The macro procedure returns some sort of code (just like normal lisp macros), but it can optionally include some of these special "symbols" that refer to bindings in a specific other environment.
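In Arc-ish pseudocode, such a macro would look roughly like this (close-syntax is the MIT Scheme name, if I remember it right; everything else is sketched):

    ; receives the whole form plus both environments
    (def safe-car-macro (form use-env def-env)
      ; reference 'car as bound where the macro was *defined*,
      ; immune to shadowing at the use site
      `(,(close-syntax 'car def-env) ,(cadr form)))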

That is much more complicated to implement, though, and requires environments as first-class objects.

-----

1 point by rntz 6274 days ago | link

Well, I wouldn't say "on syntax", I'd say "on code" - and code only contains symbols, not "references to variables in a specific scope"; scope is determined by context - the surrounding code, which the macro can't alter: you can't substitute code into other code while retaining the same surrounding code, that's a contradiction in terms! But this is just terminology.

The old MIT Scheme macro system seems interesting from what you say - is there any place I could go to find an implementation which has this behavior? Or at least examples of code which use it? It seems like it lets code do precisely what I said it couldn't above: contain "references to variables in a specific scope", which is pretty cool. I don't think you'd need to implement environments as first-class run-time objects, merely second-class compile-time objects, with this technique, unless you also allow macros themselves to be first-class.

-----

2 points by nlavine 6273 days ago | link

Okay, I think I'm starting to see. There is quite a big difference between the way Lisp people think of macros and the way Scheme people think of them.

From the documentation, I think that the current version of MIT scheme has this behavior, so look at http://groups.csail.mit.edu/mac/projects/scheme/. (By the way, in case you run Ubuntu, the version of MIT Scheme in the repositories is broken for me.) Look in the documentation for macros (under "Special Forms"), and it's their non-standard low-level macro system. If you're interested in stuff like that, you should also check out syntax-case, which I don't know much about, but I understand is the new, cool way to write Scheme macros. It includes hygienic and unhygienic functionality. Google search for "syntax case" and you'll get some documentation.

The more I look at it, though, the more I think that Scheme macros solve a different problem than Lisp macros. I don't know what it is yet, but it would be interesting to know.

-----

1 point by cchooper 6273 days ago | link

I think you've hit the nail on the head. Hygienic macros and unhygienic macros are very different things (unlike dynamic vs lexical scoping, which are just different ways to create a function). Lisp macros are 'true' macros (Wikipedia: "Macro: a set of instructions that is represented in an abbreviated format"). Hygienic macros are more like a new abstraction that was inspired by Lisp macros.

-----

1 point by nlavine 6273 days ago | link

Well, I'd rather not argue about what 'true' macros are, but I would point out that your definition is basically data compression for programs (which, by the way, I think is an interesting approach to take to programming language design). I'm pretty sure both types of macros and normal functions would all fall under it.

As for the hygienic vs. unhygienic difference, unhygienic macros are certainly easier to define: they rearrange source code into other source code.

The one thing I can think of that hygienic macros can do that unhygienic ones can't is that while they are rearranging source code, hygienic macros can insert references to things that aren't bound to any variable in the expansion scope. The common example I've seen for this is that it lets you protect against people redefining your variables weirdly. For instance, if you insert a reference to 'car', it means whatever 'car' meant where you defined your hygienic macro, even if 'car' has been redefined to be something crazy in the place where your macro is used. The Scheme hygienic macro system also has a way to break hygiene if you want to, so it can do everything other Lisp macros can do.
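A concrete case of the 'car' protection, in Arc terms (where nothing prevents the capture):

    (mac firstof (x) `(car ,x))

    ; the local rebinding silently captures the macro's 'car:
    (let car [cons 'broken _]
      (firstof '(1 2)))

A hygienic 'firstof would keep pointing at the 'car that was in scope at its definition.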

I guess the question then is, is it useful to be able to do that?

And if you decide you want to be able to do that, are Scheme-style hygienic macros the right way to go about it?

(One option would be to just let you insert objects straight into forms, instead of symbols that you think should reference those objects. This would be fine unless you wanted to be able to set those things later, in which case you'd need some way to get at the actual underlying variable boxes.)

-----

1 point by stefano 6273 days ago | link

> That is much more complicated to implement, though, and requires environments as first-class objects.

Given the interpreted nature of Arc, first class environments shouldn't be too hard to implement, but in a compiled implementation it would be a lot more difficult.

-----

2 points by rntz 6272 days ago | link

They could not be implemented on top of the existing arc-to-scheme translator, because that's what it is: an arc-to-scheme translator, not an interpreter. Scheme doesn't have first-class environments, so we can't shove them into arc without writing a full-fledged arc interpreter.

-----

1 point by stefano 6271 days ago | link

I've had a closer look at Arc2.tar. You're right. I thought that runtime environments were handled explicitly.

-----

1 point by almkglor 6273 days ago | link

Depending on how environments are handled.

If the environments are temporaries created only while processing macros, and are discarded afterwards, they don't necessarily have to be difficult for compilers.

-----

1 point by absz 6274 days ago | link

For what it's worth, (PLT) Scheme can in fact do what you require. After all, there's an implementation of defmacro that ships with it (though it might be called something else)! And there's a way (if I recall correctly, a very long, verbose way, shoring up your second reason) to implement aif and the like in their system by "requesting a shadower" or something like that.

-----

2 points by rntz 6293 days ago | link | parent | on: Why I think Arc should use packages

While I like the idea of simplifying packages, I'd like to point out that packages are no universal panacea. Consider the following:

    ;; In package 'a
    (mac afn (parms . body)
      `(rfn self ,parms ,@body))

    ;; In package 'b
    (import 'a::afn 'afn)

    ;; A stupid example function
    (afn () self)

    ;; The translation process
    (b::afn () b::self)
    
    (rfn a::self () b::self)
    
    (let a::self nil
      (set a::self
           (fn () b::self)))
'afn is broken if used with packages. This can be fixed by importing 'self from the 'a package, but it's an ugly fix.

-----

2 points by almkglor 6293 days ago | link

Then 'self will have to be part of the arc::v3 interface.

There's a reason why there's an interface abstraction in my proposal.

-----

4 points by rntz 6297 days ago | link | parent | on: Struquine in Lisp (and Arc)

Well, this approach basically is the 'annotate approach, except instead of having a separate basic data type - a "tagged" object - you just use cons cells, where the car is the type and the cdr the rep.

The problem is that unless you make lists also use this representation, as Mathematica does, you can't distinguish between lists and objects of other types - a major problem, especially if you use anarki stuff like 'defm and 'defcall. Moreover, objects of this form do not evaluate to themselves, but to structurally equivalent objects. This is usually okay, but if you're using shared, mutable data structures, it's kind of problematic; moreover, it can introduce a lot of unnecessary consing. This isn't likely to be a major problem (how often are objects subject to excess evaluation?), but if it does crop up, it could make for some nasty bugs.
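The indistinguishability problem, concretely:

    (= obj (cons 'point '(1 2)))   ; meant as a tagged object
    (= lst (list 'point 1 2))      ; meant as a plain list
    ; both are (point 1 2), and both have type 'cons,
    ; so 'type and 'isa can't tell them apart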

-----

2 points by almkglor 6296 days ago | link

> Well, this approach basically is the 'annotate approach, except instead of having a separate basic data type - a "tagged" object - you just use cons cells, where the car is the type and the cdr the rep.

Except for the case where (isa R T), where (annotate T R) returns R directly:

  arc> (annotate 'int 0)
  0

-----

1 point by rntz 6295 days ago | link

This isn't a fundamental difference. Just as 'annotate could as easily create a new doubly-wrapped object, 'annotate-cons can as easily not cons when the type is the same.

    (def annotate-cons (typ val)
      (if (isa val typ) val (cons typ val)))

-----

1 point by rincewind 6296 days ago | link

the difference between this and annotate is that this lets you write one quote less, compare:

   >(point 1 2)
   (point 1 2)
and

   >'#3(tagged point (1 2))
   #3(tagged point (1 2))

-----

2 points by rntz 6295 days ago | link

As far as I'm concerned, this is a bug/misfeature in Arc. Objects should always evaluate to themselves unless they're symbols or lists.

-----

3 points by rntz 6300 days ago | link | parent | on: Mutual recursive lambdas in a with?

This won't work if you want to define a function called 'o. Reason: (let var val ...) becomes ((fn (var) ...) val), and if var is (o ...), it becomes ((fn ((o ...)) ...) val). (o ...) is interpreted as an optional parameter, which disables destructuring.

Destructuring in lets is a nice idea, and using this with nil is a cool hack, but until and unless this behavior is changed, don't rely on it in macros - it can cause unexpected code breakage. A better way to do it would be the following (my 'fwithr is your 'mrec).

    (mac withr (bindings . body)
      (let bindings (pair bindings)
        `(with ,(mappend [list car._ nil] bindings)
           ,@(mapeach b bindings
               `(set ,@b))
           ,@body)))

    (mac fwithr (fns . body)
      `(withr
         ,(mappend [list car._ `(fn ,@cdr._)] fns)
         ,@body))
Edit: It seems almkglor has already done something like this, but his also has the "let doesn't always destructure" bug: http://arclanguage.org/item?id=7387

-----


You can't use "foo(bar)" to call foo on bar in arc, you have to say "(foo bar)". Interpreting "dot((list 0 -1 1) 0.3 0.3)", Arc first evaluates 'dot (doing nothing) and then evaluates ((list 0 -1 1) 0.3 0.3), which calls the newly created list '(0 -1 1) on the two arguments 0.3 and 0.3, which in turn causes an error because it's trying to get the 0.3rd element from the list, which makes no sense. (Also it's been given two arguments when it expects only one.) This is a case of Arc not having very informative error messages.

Also, I think there's an error in the 'dot function, as you use a '+ where it will do nothing. Arc has no infix math, so what you want is something like this:

    (def dot (g x y)
      (+ (* (g 0) x) (* (g 1) y)))
    
    (dot (list 0 -1 1) 0.3 0.3)

-----

1 point by bOR_ 6307 days ago | link

arg. that's a silly error of mine. I translated the program from ruby, and that one must have slipped.

-----
