Arc Forum
2 points by aw 5257 days ago | link | parent

For myself, I'm still working out the patterns and abstractions for writing code in order.

For example, the rule says that code shouldn't depend on code that comes after it, but then what about recursive definitions? Asking the question leads me to discover things like extend and defrule.
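To make the recursion case concrete, here's a sketch in Python rather than Arc (the mechanism is the same in any dynamic language): a recursive call isn't really a forward dependency, because the name is looked up when the function runs, not when it's defined.

```python
def fact(n):
    # The reference to `fact` below is not a dependency on later code:
    # the name is resolved at call time, after this definition exists.
    return 1 if n == 0 else n * fact(n - 1)

print(fact(5))  # 120
```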

And there are subtler questions that I don't really understand yet. For example, if I write unit tests that work with earlier definitions, then those unit tests should still work after the later definitions are loaded. So later definitions should extend the program while letting the earlier definitions still work. But what does that mean exactly?
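Here's a rough Python sketch of what I mean (the `parse` function and its string-handling case are invented for illustration): a later definition wraps the earlier one, so unit tests written against the earlier definition keep passing after the extension is loaded.

```python
def parse(token):
    """Earlier definition: handles integers."""
    return int(token)

# Unit test written against the earlier definition.
assert parse("42") == 42

# Later "extension": wrap the earlier definition, adding a case
# without changing existing behavior (roughly the shape of Arc's extend).
_parse_prev = parse
def parse(token):
    if token.startswith('"'):
        return token.strip('"')   # new case
    return _parse_prev(token)     # fall through to the old behavior

# The earlier test still works after the later definition is loaded...
assert parse("42") == 42
# ...and the new behavior is available too.
assert parse('"hi"') == "hi"
```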



2 points by akkartik 5257 days ago | link

the rule says that code shouldn't depend on code that comes after it, but then what about recursive definitions? Asking the question leads me to discover things like extend and defrule.

It's really interesting that you start with a simple axiom and end up with something largely overlapping with the principle of separation of concerns.

I say 'overlapping' because I sense you aren't quite using extend in the same way. You're using extend more rigidly (no negative connotation intended) than me, chopping your functions up into finer bits than I do.

Hmm, I find myself focusing on the parallel between overrideable definitions in dynamic languages like arc and ruby, and languages like haskell that permit defining functions in multiple clauses:

  fact 0 = 1
  fact n = n * fact (n - 1)
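In a dynamic language you can layer the same two clauses by overriding the earlier definition, with the later clause falling through to the earlier one when its guard doesn't match. A Python sketch of the idea:

```python
def fact(n):
    # First "clause": the base case.
    return 1

_fact_base = fact
def fact(n):
    # Second "clause": the recursive case; fall through to the
    # earlier definition when the guard doesn't match.
    if n > 0:
        return n * fact(n - 1)
    return _fact_base(n)

print(fact(5))  # 120
```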

-----

2 points by aw 5257 days ago | link

You're using extend more rigidly

Yes, I should probably clarify that usually I'm not that rigid either. Most of my Arc programming is exploratory programming (figuring out what it is that I actually want), not engineering (implementing a solution to known goals).

Now the transition from exploratory programming to engineering is an interesting one. If I've created a program (using primarily exploratory programming techniques) and then I want my program to run on Arc-m, that's then largely an engineering effort with an easy-to-describe goal: I want my program to work the same on Arc-m as it does on Arc-n.

Paying the cost of "always" doing things in a particular rigid way doesn't make sense during exploratory programming. But if I then have a particular engineering task (converting this to run on Arc-m), then introducing some rigidity (scaffolding, if you will) is often useful: writing some unit tests, laying the dependencies out clearly.

-----

1 point by akkartik 5257 days ago | link

The benefit of interleaved tests is future-proofing, like you said: when making large, non-localized changes such as the move to a new arc, you know exactly which test failed first. But what happens when you scale from one file to two? When multiple files of definitions depend on arc.ss?

The benefit of keeping tests in a distinct file: you can reason about the final semantics of definitions without thinking about the order in which they are loaded. That is a useful simplification, except when making large, non-localized changes.

I'm not certain of these ideas by any means; just thinking out loud. Would it be useful to analyze the tree of dependencies, and to execute tests in dependency order regardless of the order they're written in? Would that give us the best of both worlds?
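A rough Python sketch of that dependency-order idea (the definition names and toy tests are made up): topologically sort the dependency graph, then run each definition's tests in that order, so the first failure points at the most fundamental broken definition.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency tree: each entry lists what it depends on.
deps = {
    "arc.ss":  [],
    "strings": ["arc.ss"],
    "html":    ["arc.ss", "strings"],
}

# Tests associated with each definition, regardless of file order.
tests = {
    "arc.ss":  lambda: 1 + 1 == 2,
    "strings": lambda: "a" + "b" == "ab",
    "html":    lambda: "<p>" in "<p>hi</p>",
}

# Run tests in dependency order: dependencies are always tested
# before the definitions that build on them.
for name in TopologicalSorter(deps).static_order():
    assert tests[name](), f"first failing test: {name}"
```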

-----