Second shameless plug: I've changed Nulan significantly since I last showed it off here. Here's a page describing Nulan's syntax system, which I believe is superior to all existing Lisp syntax systems:
If I follow everything on that page, this allows you to programmatically control where lists are segmented, and to generalize infix, resolving function/macro calls to be anywhere in the list. In other words, macro foo doesn't have to look like
(foo ...)
it can also look like
(... foo ...)
Does this make the grammar context-sensitive? Does it often introduce ambiguity? Have you run into non-terminating parsing? Can you talk about how it compares with oMeta? (Are you sure you aren't greenspunning it? :)
"If I follow everything on that page, this allows you to programmatically control where lists are segmented [...]"
Something like that, yes.
---
"[...] resolving function/macro calls to be anywhere in the list. In other words, macro foo doesn't have to look like"
Not quite. The parser is very simple: it just pushes symbols around. That means when you use a syntax rule like this:
$syntax-infix "foo"
Then the parser will rewrite this:
1 foo 2
Into this:
foo 1 2
This all happens before macros are expanded, so macros receive a uniform list representation. In this way, it's similar to wart's system, except that it's much more flexible and powerful.
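As a rough sketch of what such a rewrite does (using a Python list of strings as a stand-in for the parser's symbol list; `rewrite_infix` is a hypothetical helper for illustration, not Nulan's actual implementation, and it only handles a single occurrence):

```python
def rewrite_infix(tokens, op):
    """Rewrite one `a op b` occurrence into `op a b`, the way a
    $syntax-infix rule does -- before any macro expansion happens."""
    out = []
    i = 0
    while i < len(tokens):
        # An infix symbol grabs the element before it and after it...
        if tokens[i] == op and out and i + 1 < len(tokens):
            left = out.pop()
            # ...and is re-emitted in prefix position.
            out += [op, left, tokens[i + 1]]
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(rewrite_infix(["1", "foo", "2"], "foo"))  # ['foo', '1', '2']
```

Since this runs before macro expansion, the `foo` macro itself only ever sees the uniform prefix form.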
---
"Does this make the grammar context-sensitive? Does it often introduce ambiguity? Have you run into non-terminating parsing?"
I have no idea, no, and no. It's all just simple list transformations, similar to what macros do.
---
"Can you talk about how it compares with oMeta? (Are you sure you aren't greenspunning it? :)"
I haven't used oMeta, but from what I understand, it uses something like PEG parsing, which is completely different.
Think of my system as being like macros: you take this list of symbols and that list of symbols and return this list of symbols.
The difference from macros is that my system supports left/right associativity, prefix/infix/suffix position, precedence, and a slew of other options too.
The key insight is that unlike most syntax systems which return a single expression, Nulan's syntax system returns a list of expressions.
And then the syntax rules operate on this list, which effectively lets them look-behind and look-ahead arbitrarily many tokens, but only within the list.
PEG parsing lets you look-ahead as many tokens as you like, but not look-behind. Nulan's system supports both, but the amount of look-ahead/behind is controlled by the indentation, so everything is handled in a consistent way.
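One way to picture this (a hypothetical `apply_rule` helper, not Nulan's real machinery; the `left`/`symbol`/`right` shape of the action call follows the description above):

```python
def apply_rule(tokens, symbol, action):
    """Hand a syntax rule the entire list: everything before the symbol
    (arbitrary look-behind) and everything after it (arbitrary
    look-ahead), but never anything outside this one list."""
    i = tokens.index(symbol)
    return action(tokens[:i], symbol, tokens[i + 1:])

# A simple infix action: replace `left[-1] symbol right[0]` with a
# nested prefix call, leaving the rest of the list untouched.
def infix_action(left, symbol, right):
    return left[:-1] + [[symbol, left[-1], right[0]]] + right[1:]

print(apply_rule(["foo", "bar", "+", "1", "2"], "+", infix_action))
# ['foo', ['+', 'bar', '1'], '2']
```

Because the rule returns a whole list of expressions rather than a single expression, it can freely reshape its surroundings within the list.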
The - syntax rule has a precedence of 70, which is equal to the current precedence of 70, so it stops parsing. It now calls the action function for * with the arguments left, symbol, and right, which returns {* 50 20}.
Now we go back to the + syntax rule.
It calls the action function for + with the arguments left, symbol, and right, which returns {foo {+ bar {* 50 20}}}:
left: {foo {+ bar {* 50 20}}}
remaining: {- 30}
Now it continues parsing with a precedence of 0. - has a precedence of 70 which is greater than 0, so it recursively calls the parser with a precedence of 70:
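The steps above are standard precedence climbing over a token list. A minimal sketch (simplified to single-expression operands and three tokens; the precedence values for `*` and `-` are taken from the walkthrough, and the nested-list output merely stands in for Nulan's {} notation):

```python
# Precedence table following the walkthrough: `*` and `-` are both 70.
PREC = {"*": 70, "-": 70, "+": 60}

def parse(tokens, min_prec=0):
    """Precedence climbing: consume operators only while their
    precedence is strictly greater than `min_prec`."""
    pos = 0
    left = tokens[pos]
    pos += 1
    while pos < len(tokens) and PREC.get(tokens[pos], 0) > min_prec:
        op = tokens[pos]
        pos += 1
        # Recurse at the operator's own precedence: an equal-precedence
        # operator on the right stops parsing, giving left associativity.
        right, used = parse(tokens[pos:], PREC[op])
        pos += used
        left = [op, left, right]
    return left, pos

tree, _ = parse([50, "*", 20, "-", 30])
print(tree)  # ['-', ['*', 50, 20], 30]
```

Tracing it reproduces the walkthrough: while parsing the right side of `*` at precedence 70, `-` (also 70) stops the recursion, so `*` builds its call first; back at precedence 0, `-` (70 > 0) then consumes that result as its left argument.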