Good questions! I keep wanting to tie this stuff back to Arc somehow, so that people here can care, lol, but it's hard to see where to start.
I'll assume we're talking about something like RDP, but not quite RDP. David Barbour's goals for RDP include safe distributed programming, secure mashups, and sometimes even type-directed reduction of boilerplate. Thus RDP isn't necessarily RDP without an appropriate static type system, and we probably don't need to commit to that part here.
Let's just call what we're talking about "continuous reactive" code, or "continuous" for short. :)
I'll answer out of order a bit.
"And what kind of metaprogramming models besides function/macro/fexpr do you see working on top of such a system?"
All of these actually keep making sense in a continuous system. The only things that can't technically be carried over are certain kinds of side effects.
Here's a more complete description of the differences:
- Most imperative side effects don't make sense. If (do (= x 1) (= x 0)) is executed continuously at every instant, then at no time is x just 1 or just 0. Continuous reactive programming needs its own particular side effects to deal with continuously displaying things, continuously maintaining connections to things, continuously monitoring the current state of the mouse, etc. Instead of mutation, continuous reactive code can use demand monitors, which aggregate sets of values. Similarly, we would have to avoid effects related to first-class continuations, threads, etc., unless we can make sense of them.
- Since all the effects at a given instant are simultaneous, the data/control flow doesn't need to be processed in a particular order, and it can naturally be concurrent, which in turn means each concurrent part can be undisturbed for a good period of time without needing to be recomputed. This means certain constructs like (if ...) stick out as expensive operations, since they must synchronize their inputs and outputs rather than letting them pass through concurrently.
- Eval and function calls are pretty weird operations. In a continuous program, the entire data flow graph exists like a constantly marching trail of ants, sometimes switching between different conditional branches. I think a dynamic call causes that trail of ants to sometimes snap between different iterations of a fractal pattern. My current implementation supports "functions" but doesn't support arbitrary recursion, because I keep every possible trail instantiated regardless of whether it's currently in use. It's like all the functions are inlined whether we like it or not. (I actually kinda like it.)
- If we care about secure mashups, passing a big context object around tends to be a bad idea, especially if it's implicit. Fexprs may need a redesign in order to avoid passing authority to code that doesn't need it.
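To make the demand-monitor point above a little more concrete, here's a rough Python sketch. The class and method names here are just mine, not any actual RDP API: the idea is that every active expression contributes a demand for the current instant, and observers see the whole aggregated set rather than a last-writer-wins value.

```python
# Hypothetical sketch of a demand monitor: instead of mutating a cell,
# every active expression contributes a demand, and observers see the
# aggregated set for the current instant.

class DemandMonitor:
    def __init__(self):
        self.demands = set()

    def demand(self, value):
        # Called by each active expression at this instant.
        self.demands.add(value)

    def monitor(self):
        # Observers see every simultaneous demand, unordered.
        return frozenset(self.demands)

# At a single instant, the analogues of (= x 1) and (= x 0) are both
# active simultaneously, so the monitor reports the set {0, 1} rather
# than picking one winner.
x = DemandMonitor()
x.demand(1)
x.demand(0)
print(sorted(x.monitor()))  # [0, 1]
```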
"How would you build a declarative continuous state system on top of arc? Or would it be better to start from scratch?"
(Building the execution semantics is probably a bigger challenge than building the state resources, so I'll assume that's what you mean.)
While starting from scratch would keep things simpler, I don't think it's necessary, as long as there's some bridge to manage the continuous side from the procedural side and/or vice versa. A single language could have dedicated syntaxes (e.g. lisp special forms) both for procedural code and for continuous code.
I've actually been thinking about how a single (fn ...) syntax could be used for both. Pure code is a special case of error-free continuous impure code, and that in turn is a special case of error-free procedural code, so a single abstraction might cover it all.
However, dynamic errors make things more interesting. In procedural Arc code, (do (foo) (err "oops 1") (bar) (err "oops 2")) would not execute (bar) because that comes after an error has occurred. In continuous code, all of the code is executed at every instant, so there is no "after" to speak of, and (bar) executes at the same time as both errors occur. (Actually, David's approach would be to terminate the computation as soon as possible and try to roll back any effects that have occurred in the meantime, so in that case (bar) would not usually be seen to execute.)
This is enough of a jarring discrepancy that I've tabled the idea of unifying the (fn ...) syntax in the procedural language I'm working on. I think it's something that can be done, but it's more complex than I first thought.
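To illustrate that discrepancy with a rough Python sketch (this is my own toy model, not David's rollback approach): the procedural version stops at the first error, while the continuous version treats every effect and every error as belonging to the same instant.

```python
# Toy model contrasting the two error semantics.

def procedural(effects):
    # Effects run in order; the first error cuts off everything after.
    log = []
    for eff in effects:
        if eff == "err":
            raise RuntimeError("oops")
        log.append(eff)
    return log

def continuous_instant(effects):
    # Every effect belongs to the same instant, so there's no "after":
    # the errors and the other effects are all observed together.
    log = [eff for eff in effects if eff != "err"]
    errors = [eff for eff in effects if eff == "err"]
    return log, errors

prog = ["foo", "err", "bar", "err"]

try:
    procedural(prog)  # raises at the first "err"; "bar" never runs
except RuntimeError:
    pass

log, errors = continuous_instant(prog)
print(log, len(errors))  # ['foo', 'bar'] 2
```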
Anyway, suppose we have two different syntaxes, or suppose (as David prefers) we don't have a continuous (fn ...) syntax at all but instead use concatenative programming techniques.
Either way, bridging between one style of computation and another is painful.
Procedural code managing continuous code: Procedural code has to deal with continuous signal input similarly to how it would traditionally deal with keyboard and mouse input, and it has to deal with continuous signal output kind of like drawing to a canvas. This means writing 2-5 imperative code blocks for every bridge, at least until some common bridge patterns can be abstracted away.
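Here's a toy Python sketch of that kind of bridge (everything here is hypothetical, just to show the shape): one imperative block polls the inputs, the continuous program is modeled as a pure function of the current instant's inputs, and another imperative block renders the output, canvas-style.

```python
# Hypothetical procedural-side bridge: each tick, sample inputs like
# device state, run the continuous program as a function of the current
# instant, and push its output to a display.

def continuous_program(mouse_pos):
    # The "continuous" part: a pure function of this instant's inputs.
    x, y = mouse_pos
    return f"cursor at ({x}, {y})"

def run_bridge(input_samples, display):
    # Imperative block 1: poll the input signal.
    # Imperative block 2: render the output signal.
    # Real bridges may need a few more blocks (setup, teardown, ...).
    for mouse_pos in input_samples:
        frame = continuous_program(mouse_pos)
        display.append(frame)

display = []
run_bridge([(0, 0), (3, 4)], display)
print(display)  # ['cursor at (0, 0)', 'cursor at (3, 4)']
```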
Continuous code managing procedural code: One way to use continuous code to trigger an imperative action would be to build a continuously updating collection of the imperative actions we want performed, annotating each one with a timestamp so it's clear exactly when it's supposed to happen. This is cumbersome enough that I haven't tried it yet, but it's basically the kind of FFI layer that a continuous-only language would need in order to play nicely with existing OSes and protocols.
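As a rough Python sketch of that idea (the names and details are all made up): the continuous side maintains a set of (timestamp, action) pairs, and a procedural agent performs each action once its time arrives, keeping track of what it's already done so nothing runs twice.

```python
# Hypothetical FFI layer: the continuous side maintains `action_set`,
# a set of (timestamp, action) pairs, and a procedural agent drains it.

def drain_actions(action_set, now, already_done):
    # Perform every action whose time has arrived, exactly once.
    performed = []
    for stamp, action in sorted(action_set):
        if stamp <= now and (stamp, action) not in already_done:
            performed.append(action)
            already_done.add((stamp, action))
    return performed

done = set()
actions = {(1, "open-file"), (5, "play-sound")}
print(drain_actions(actions, 2, done))  # ['open-file']
print(drain_actions(actions, 6, done))  # ['play-sound']
```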
Sorry for using so many JS metaphors. :) Aside from Rainbow's UI experiments, I don't know of any Arc applications which continuously monitor or display something.
In case anyone wonders what's wrong with (if ...), I've put together a demonstration of translating Arc-like code into the flavor of generalized arrows David and I use for implementing this stuff:
(     |     +    |     )  |   |   |
( foo(true) + foo(nil) ) bar baz qux
==================Disjoin===================
foo(true) bar baz qux + foo(nil) bar baz qux
    |      |   |   |  +    |      |   |   |
  Drop   Drop Drop |  +   Drop =Apply= Drop
                   |  +          |
===================Merge====================
                      |
In this graph, sum types (A + B) are edges separated by + and product types (A * B) are simply juxtaposed edges. I've labeled the edges around Disjoin to show which signals go where.
The "Disjoin" operation is the distributive law on product and sum types, taking ((A + B) * C) to ((A * C) + (B * C)). In simple cases, it has to synchronize C with a combination of A and B so it knows which output to send C's packets down.
The "Merge" operation takes (A + A) to A, forgetting which branch it was on. In simple cases, it has to synchronize the original two signals because the output should only be inactive when both inputs are inactive. If we're using dynamic typing, we may also need to check the inputs dynamically to make sure their periods of activity don't overlap.
Overall, one call to (if ...) causes multiple synchronization points in the code.
You might wonder what Arc could use here if not (if ...), and my guess is it could use a structured data type with associated special forms that can be implemented with implicit concurrency:
(def left (x) (annotate 'left x))
(def right (x) (annotate 'right x))
; Sum types distribute over product types.
; ((A + B) * C) -> ((A * C) + (B * C))
(disjoin (left a) c) --> (left (list a c))
(disjoin (right b) c) --> (right (list b c))
(disjoin other c) --> (err "Can't disjoin")
; Sum types are commutative.
; (A + B) -> (B + A)
(mirror (left a)) --> (right a)
(mirror (right b)) --> (left b)
(mirror other) --> (err "Can't mirror")
; Sum types are associative.
; ((A + B) + C) -> (A + (B + C))
(assocsl (left (left a))) --> (left a)
(assocsl (left (right b))) --> (right (left b))
(assocsl (right c)) --> (right (right c))
(assocsl other) --> (err "Can't assocsl")
; (A + (B + C)) -> ((A + B) + C)
(assocsr (left a)) --> (left (left a))
(assocsr (right (left b))) --> (left (right b))
(assocsr (right (right c))) --> (right c)
(assocsr other) --> (err "Can't assocsr")
; (A + A) -> A
(merge (left a)) --> a
(merge (right a)) --> a
(merge other) --> (err "Can't merge")
; (A -> C) -> (B -> D) -> ((A + B) -> (C + D))
((lift-sum l r) (left a)) --> (left (l a))
((lift-sum l r) (right b)) --> (right (r b))
((lift-sum l r) other) --> (err "Can't lift-sum")
This is quite a bundle of special forms to add, and I'm probably forgetting a few. I'm not exactly sure how to implement them, especially considering they would have to deal with Arc's arbitrary recursion. ^_^;
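For what it's worth, here's a quick Python transliteration of those reductions (ignoring the error cases), using ('left', x) and ('right', x) tuples in place of (annotate 'left x). It's just a sketch of the value-level behavior, not of the concurrent implementation.

```python
# Tagged-tuple sketch of the sum-type special forms above.

def left(x):  return ('left', x)
def right(x): return ('right', x)

def disjoin(s, c):
    # ((A + B) * C) -> ((A * C) + (B * C))
    tag, v = s
    return (tag, (v, c))

def mirror(s):
    # (A + B) -> (B + A)
    tag, v = s
    return ('right' if tag == 'left' else 'left', v)

def assocsl(s):
    # ((A + B) + C) -> (A + (B + C))
    tag, v = s
    if tag == 'left':
        inner_tag, w = v
        return left(w) if inner_tag == 'left' else right(left(w))
    return right(right(v))

def assocsr(s):
    # (A + (B + C)) -> ((A + B) + C)
    tag, v = s
    if tag == 'right':
        inner_tag, w = v
        return left(right(w)) if inner_tag == 'left' else right(w)
    return left(left(v))

def merge(s):
    # (A + A) -> A
    return s[1]

def lift_sum(l, r):
    # (A -> C) -> (B -> D) -> ((A + B) -> (C + D))
    def lifted(s):
        tag, v = s
        return (tag, l(v) if tag == 'left' else r(v))
    return lifted

print(disjoin(left('a'), 'c'))   # ('left', ('a', 'c'))
print(assocsl(left(right('b')))) # ('right', ('left', 'b'))
```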