There seem to be two schools of thought around debugging today. The first is to minimize debugging by use of types, like in Java or Haskell. The second is to embrace debugging as an eternal fact of life, and to ease things by making code super lightweight and easy to change.
Both approaches are valid; combining them doesn't seem to work well. The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic.
PG's style seems to be akin to sketching (http://paulgraham.com/hp.html; search for 'For a long time'). That implicitly assumes you're always making mistakes and constantly redoing code. My version of that is to add unit tests. That way I ensure I'm always making new mistakes.
I'd say both approaches you're talking about are all about failing fast, and that unit tests are a way to shove errors up to compile time manually, by running some arbitrary code after each compile. Languages that let the programmer run certain kinds of code at compile time anyway (like a type system or a macroexpander) have other options for where to shove these errors, though they may not always make sense there.
Conversely, they may not make sense in unit tests: If we want to know that a program behaves a certain way for all inputs, that might be easy to check with a static analysis but difficult (or effectively impossible) to check using example code.
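To make that gap concrete, here's a small Python sketch (the `is_sorted` helper is invented for illustration): example-based tests, even with random sampling, only ever witness finitely many inputs, while the claim being checked is universal.

```python
import random

def is_sorted(xs):
    """True if xs is in nondecreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Example-based checks: they pass for these inputs, and say nothing
# about any other input.
assert is_sorted(sorted([3, 1, 2]))
assert is_sorted(sorted([]))

# Random sampling covers more ground, but still only a finite slice of
# the input space. "sorted(xs) is ordered for *all* lists xs" is a
# universal claim that no amount of example code can establish -- while
# a static analysis or a sufficiently expressive type system can.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert is_sorted(sorted(xs))
```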
"The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic."
I'd say Arc is a demonstration of this option. XD I thought the whole point of Arc being for sufficiently smart programmers was that no guard rails would be erected to save programmers from their own buggy programs.
Anyway, if a language designer is trying to make a language that's easy to debug, static types and unit tests are hardly the only options. Here's a more exhaustive selection:
- Reject obviously buggy programs as being semantically meaningless. This could be any kind of error discovered by static checks, from parse errors to type errors.
- Give the programmer tools to view the program at intermediate stages as it simplifies. Step debuggers do this for imperative languages. Other languages may have bigger challenges thanks to staging (like macroexpansion) or notions of "effect" that feature simultaneous, time-sensitive, or tentative behavior, for instance.
- Create rich visualizations of the program's potential behavior. We discussed Bret Victor's demonstrations of this recently (though I didn't participate, lol): http://arclanguage.org/item?id=15966
- Collapse the edit-debug cycle so that diagnostic information is continuously visible as the programmer works. Again, this is something Bret Victor champions with a very visual approach. IDEs also provide this kind of information in the form of highlighting compile-time errors.
- Give the running program extra functionality that exposes details of the codebase that would normally be hidden. If a program runs with a REPL or step debugger attached, this can be easy. (Also, a programmer can easily pursue this option in lots of languages by manually inserting these interaction points, whether they're as simple as printing to the console or as complicated as a live level editor.)
- Provide tools that write satisfactory code on the programmer's behalf. IDEs do this interactively, especially in languages where sophisticated static analysis can be performed. Compilers do this to whole programs.
- Provide abstraction mechanisms for the programmer to use, so that a single bug doesn't have to be repeated throughout the codebase.
- Provide the programmer with an obvious way to write their own sophisticated debugging tools. A static analysis library might help here, for instance. An extensible static analysis framework, such as a type system, can also help.
- Provide the programmer with an obvious way to write and run unit tests.
- Simply encourage the programmer to hang in there.
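To make the unit-test option concrete, here's a minimal sketch in Python (the `parse_version` function is a hypothetical stand-in, not anything from the thread): hand-written checks that rerun after every edit, playing roughly the role a type checker plays in statically typed languages.

```python
import unittest

def parse_version(s):
    """Parse 'major.minor' (e.g. '3.4') into a tuple of ints.
    Hypothetical function under test."""
    major, minor = s.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    # Each test method is a manually written check that reruns after
    # every change -- errors get shoved "up to compile time" by hand.
    def test_basic(self):
        self.assertEqual(parse_version("3.4"), (3, 4))

    def test_rejects_garbage(self):
        # Splitting "not a version" on "." yields one piece, so the
        # two-variable unpacking raises ValueError.
        with self.assertRaises(ValueError):
            parse_version("not a version")

# Run the suite programmatically; from a shell you'd normally use
# `python -m unittest` instead.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest))
```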
You don't hear people say of Arc, "it worked the first time I wrote it." That's more Haskell's claim to fame.
The dichotomy I'm drawing isn't (in this case) about how much you empower the user but how you view debugging as an activity. I claim that Haskellers would like you to reason through the correctness of a program before it ever runs. They consider debugging to be waste. I consider it to be an essential part of the workflow.
The points in the design space that you enumerate are totally valid; I was just thinking at a coarser granularity. All your options containing the word 'debug' (at least) belong in my second category.
Perhaps what's confusing is the word 'debugging' with all its negative connotations. I should say instead: relying on watching the program run while you build, vs. relying only on abstract pre-runtime properties. It's the old philosophical dichotomy of finding truth by reason vs. by the senses.