T-expressions are my attempt at answering the question 'what's the minimal clean notation for expressing Prolog terms in S-expressions?'
PicoLisp's Pilog, for example, just decides that 'a term is a list'. Which is very clean and simple - but unfortunately it means you can't distinguish between a term AND a list, and it's often very important to tell the two apart.
So the simplest solution is just to reserve one symbol (I use '/' because it's available and not special-syntax on most Lisps... though any symbol would do, and if I were writing from scratch, I might think about repurposing '.') to indicate 'the start of a Prolog-like term'.
The rest of that post series is really just about teasing out some of the implications of this, because I think it opens a lot of interesting possibilities for adding a very minimal notion of 'typed expression' to Lisp. Minimal in that there is no specification of what the meaning of a type is; just that a certain expression is more than just a raw list.
There are a number of applications of this: for example, we can replace almost all special non-S-expression 'reader macro' syntax.
Also having lists which are not Lisp functions/macros means we could implement 'implicit cons', which allows us to drop a lot of complexity in list-builder expressions. For example:
If x were (a b) and y were (c d e) then a standard Lisp expression might look like this:
    (cons 1 (cons (car x) (cdr y)))
which evaluates to:
    (1 a d e)
But if we had a Lisp where function calls (or all forms) were term-expression terms, i.e. prefixed with '/', then we could write that expression as:
(1 (/ car x) / cdr y)
Suddenly we get much cleaner notation: fewer nested parens, less awkward syntax. And we can apply this technique to a lot of things.
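To make the intended reading concrete, here is a toy sketch in Python (my own rendering, not the post's actual implementation): tuples stand in for S-expressions, a tuple beginning with '/' is a call, and a bare '/' inside a list means the rest of that list is a call whose result becomes the tail (the 'implicit cons').

```python
SLASH = '/'

def t_eval(expr, env):
    """Evaluate a toy T-expression written as a Python tuple."""
    if isinstance(expr, tuple):
        # A tuple starting with '/' is a call: ('/', fn, arg, ...)
        if expr and expr[0] == SLASH:
            fn = env[expr[1]]
            return fn(*[t_eval(arg, env) for arg in expr[2:]])
        out = []
        for i, item in enumerate(expr):
            if item == SLASH:
                # Implicit cons: the remainder of the tuple is a call
                # whose result is spliced in as the tail of the list.
                tail = t_eval((SLASH,) + expr[i + 1:], env)
                return out + list(tail)
            out.append(t_eval(item, env))
        return out
    # Strings bound in the environment act as variables;
    # everything else is literal data.
    return env.get(expr, expr) if isinstance(expr, str) else expr

env = {
    'x': ['a', 'b'],
    'y': ['c', 'd', 'e'],
    'car': lambda xs: xs[0],
    'cdr': lambda xs: list(xs[1:]),
}

# The flattened form (1 (/ car x) / cdr y):
print(t_eval((1, (SLASH, 'car', 'x'), SLASH, 'cdr', 'y'), env))
# prints [1, 'a', 'd', 'e']
```

The nesting of the original `(cons 1 (cons (car x) (cdr y)))` disappears because each element before the trailing `/ cdr y` is implicitly consed onto the evaluated tail.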
For another example: say we had a list which was generated from a procedure (local or remote) and we wanted to show that it continues at a remote website:
    (1 2 3 / more-at-URL)
where "more-at-URL" is either a function that evaluates to a list, or just a piece of syntax that's parsed by some higher level.
This is a basic "lazy list" technique, well known to Lisp people for decades... but the fact that we don't need to hide it behind lambdas (we can do this inline, and mark ANY list up with a CDR that is an arbitrary expression) is, I think, an important capability worth thinking about.
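For readers who haven't met the lazy-list technique, here is the classic lambda-hiding version sketched in Python (the post's point is precisely that T-expression syntax would let the continuation be written inline as a /-term rather than wrapped in a thunk like this; the names here are my own):

```python
class LazyPair:
    """A cons cell whose tail may be a thunk, forced on first access."""

    def __init__(self, head, tail):
        self.head = head
        self._tail = tail  # either a value or a zero-argument callable

    @property
    def tail(self):
        if callable(self._tail):
            self._tail = self._tail()  # force once, then cache
        return self._tail

def naturals(n):
    # The tail is hidden behind a lambda -- the thing T-expressions
    # would let us write inline as an arbitrary CDR expression.
    return LazyPair(n, lambda: naturals(n + 1))

def take(pair, n):
    out = []
    while pair is not None and n > 0:
        out.append(pair.head)
        pair = pair.tail
        n -= 1
    return out

print(take(naturals(0), 5))  # prints [0, 1, 2, 3, 4]
```

A "continues at a remote website" list would work the same way, with the thunk performing a fetch instead of computing the next number.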
I also think that we can then take this syntax and extend it down to the level of formal logic, because what we are essentially doing is writing (slightly extended) First-Order Predicate Logic expressions... and so we might want to think about exact mappings between FOPL and our Lisp expressions. And vice versa: I think we need to look hard at what Lisp, especially the S-expression 'dotted pair', allows us to express that standard vanilla FOPL does not, and ask what that extra piece of information might imply for logic - in the vein of, e.g., what the HiLog researchers found, since this is a very similar kind of syntactic extension to FOPL as HiLog. See, for example, the 1980s Sinclair Spectrum MicroProlog, which stores Prolog expressions as S-expressions internally, and which discovered the dotted-pair technique for FOPL expressions and called it 'meta-variables'.
Final note: the syntax in my blog posts assumes a custom reader macro, so I don't put a space after the '/' wherever it appears, to make the expressions look a bit cleaner... but if you were embedding T-expressions on top of S-expressions in an existing Lisp, you would put a space there. You'd also need to sort out some way of disambiguating, in all cases, between /-as-symbol and /-as-syntax. This last bit might be a little tricky.
> Really, #t and #f are not proper Arc booleans, so it makes sense that Arc can't tell what type they are.
Really, #t and #f are not proper Arc anything, but the language apparently handles them, so IMHO Arc should also be able to know what type they are. Otherwise, I fear, this will become a hodgepodge language that will lose appeal.
Personally I don't care if Arc supports booleans. I only care that it can translate booleans (when need be) to a meaningful Arc semantic. That said, if we're going to support booleans then let's not create partial support.
That documentation may be wrong. On the other hand, it may be correct in the context of someone who is only using Arc, not Racket.
There are a lot of ways to conceive of what Arc "is" outside of the Racket implementations, but I think Arc implementations like Rainbow, Jarc, Arcueid, and so on tend to be inspired first by the rather small set of operations showcased in the tutorial and in arc.arc. (As the tutorial says, "The definitions in arc.arc are also an experiment in another way. They are the language spec.") Since #f isn't part of those, it's not something that an Arc implementation would necessarily focus on supporting, so there's a practical sense in which it's not a part of Arc our Arc code can rely on.
(Not that any other part of Arc is stable either.)
But now that I look closer at the ac.scm history (now ac.rkt in Anarki), I realize I was mistaken to believe Arc treated #f as a different value than nil. Turns out Arc has always equated #f and 'nil with `is`, counted them both as falsy, etc. So this library was already returning nil, from Arc's perspective.
There are some things that slip through the cracks. It looks like (type #f) has always given an "unknown type" error, as opposed to returning 'sym as it does for 'nil and '().
So with that in mind, I think it's a bug if an Arc JSON library returns 'nil or #f for JSON false, unless it returns something other than '() for the empty JSON array. To avoid collision, we could represent JSON arrays using `annotate` values rather than plain Arc lists, but I think representing JSON false and null using Arc symbols like 'false and 'null is easier.
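The collision is easy to see in any language where false, null, and the empty list share a representation. Here is a small Python sketch (my own illustration; the string sentinels merely play the role of the Arc symbols 'false, 'null, and t) showing how distinct sentinels keep JSON false, null, and [] from collapsing into one value:

```python
import json

def arcify(v):
    """Map decoded JSON values to distinct Arc-symbol-like sentinels."""
    if v is True:
        return 't'        # stand-in for Arc's t
    if v is False:
        return 'false'    # stand-in for the symbol 'false
    if v is None:
        return 'null'     # stand-in for the symbol 'null
    if isinstance(v, list):
        return [arcify(item) for item in v]
    if isinstance(v, dict):
        return {k: arcify(item) for k, item in v.items()}
    return v

# false, null, and [] each survive as a different value:
print(arcify(json.loads('[false, null, []]')))
# prints ['false', 'null', []]
```

If false and null were both mapped to nil (which in Arc is also '(), the empty list), a round-trip through the decoder could not tell `false`, `null`, and `[]` apart.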
> This parameter determines the default Racket value that corresponds to a JSON “null”. By default, it is the 'null symbol. In some cases a different value may better fit your needs, therefore all functions in this library accept a #:null keyword argument for the value that is used to represent a JSON “null”, and this argument defaults to (json-null).
If you set the JSON null to nil before running your example, it works as you'd expect:
I think we should get rid of json.rkt and use the Racket built-in, which is way better documented. (But I'm not going to delete json.rkt myself, particularly when I know someone is working with it.)
The JSON solution is a quick and dirty hack by a rank noob, and I'm sure something better will come along.
And in hindsight the problem with the (body) macro should probably have been obvious, considering HTML tables are built using (tab) and not (table). I'm starting to think everything other than (tag) should be done away with to avoid the issue in principle, but that would be a major undertaking and probably mostly just bikeshedding.
It's great to see a JSON API integrated in Arc. :)
I took a look and found fixes for the unit tests. Before I got into that debugging, though, I noticed some problems with the JSON library that I'm not sure what to do with. It turns out those are unrelated to the test failures.