Yeah, that's a pretty good example of how a design hole might play out. :) The developer wants to present a multiple-choice question, not a text box that happens to have default options off to the side. If they add an "other" choice, that's plan B, for when plan A isn't quite something the user wants to participate in.
If we say the developer has plan A, plan B, plan C, and so on, it's probably more honest to say these are a continuum. They're like layers of sediment, and a design hole in one layer just means we can step down into the next one. But the deeper levels of developer intentions are more fluid and fickle, and after some point (plan N, plan O, plan P...), the developer has no idea what they want. They'll just leave the problem for someone else to solve, whether it's their future self, a mind clone, a client, a friend, etc.
In this blog post, I imagined that cliff would be pretty drastic; one portion of the intentions is absolutely precise, and the other part is so open-ended that it must be handled by an intelligent representative. The dividing line is whether the program code contains explicit instructions, or whether it invokes an error mechanism. But it would be nice to acknowledge that this isn't the only line on the map.
Hmm, here's a possible two-dimensional map:
Rigor 1: Code so rigorous and comprehensive, yet with such a simple interface, that you could call it a mathematical theory. Code like this almost never needs to change, but sometimes it isn't the right code for the task at hand.
Rigor 2: Code that is built using simple mechanical metaphors on a simple mechanical foundation (like most of today's programming language semantics), but is expected to change to some degree. This layer might be further divided into external API definitions, libraries, and application code, in order of decreasing stability.
- (In the blog post, my dividing line was somewhere around here.) -
Rigor 3: Code that relies on intricate and data-heavy techniques but can still be engineered in fuzzy ways, like AI.
Rigor 4: Code that relies on a true universal intelligence embedded in the program. Mind clones go here. So do human pilots.
Stubbornness 1: Code that gets deployed with the program and always stays. Without this code, it just isn't the same program.
- (In the blog post, my dividing line was somewhere around here.) -
Stubbornness 2: Code that gets deployed with the program but actually isn't as precise as it looks. Perhaps it represents initialization data that may change over time due to external input, or perhaps it describes an ideal outcome that an AI system will try to approximate.
Stubbornness 3: Code which can be modified, and which is completely useless until that modification actually occurs, except perhaps for the useful behavior of begging an external system for help.
Actually, I hesitate to give the above lists without mentioning some other interesting kinds of code that don't quite seem to have a place in those two dimensions. I think it's actually possible to organize these outliers into two more dimensions:
Immersion 1: Code which exists in such a transitory way that the developer hardly even considers it their code in the first place. For instance, a player can control Mario, but Mario is just a part of a much bigger program beyond the player's control, and (I think) players rarely consider themselves to be developers of Mario control programs.
Immersion 2: Code which communicates with the developer in a way that's extremely inexpensive for them both. A player can control Mario without bothering to open a text editor, so something like this might be possible for programming too.
Immersion 3: Code which, if modified, somehow exposes its modifications to the developer(s) for possible inclusion in future deployments. Copyleft licenses are a non-mechanical example of this. Another example is a personal shell script which never leaves the developer's machine to begin with.
Immersion 4: Code which is completely detached from the developer once it's deployed. (In the blog post, I only considered this kind of code.)
Openness 1: Code which an external system can modify using tools very similar to the ones that were used to write the original code.
Openness 2: Code which was created using certain tools but must be modified using others, like a special reflection API, scripting language, or bytecode generation.
Openness 3: Code which can't be modified.
(In the blog post, this dimension was irrelevant to me.)
(Stubbornness 1 and Openness 3 sound like the same thing, and I think Immersion 0 would be the same thing too. It might help to reverse the rankings on some of these axes; I just put them in an order that made them relatively easy to explain.)
So there, I guess I probably won't divide programs in quite the way I described in that blog post. I now have plenty of rope to confuse myself with. :-p
Just in case... which version are you trying to install?
You mention "MzScheme," but don't let the arclanguage.org front page fool you. Arc works on up-to-date versions of Racket, not just the old MzScheme 372. See https://sites.google.com/site/arclanguagewiki/ for more comprehensive installation information.
I see some useful-looking Web search results for [Racket on Raspberry Pi], and they're much more recent than 2007.
I really liked this point: "In preference to having usage errors, the system tends to support ad hoc behaviors that seem useful enough for now." I think using this principle you can deal with design errors in useful ways while avoiding some of the controversy about cloning developers' minds and such. What I'm thinking of is the error mechanism that lets you get around whatever abstraction is causing problems at the moment.
For example, say you're designing a survey with multiple-choice questions. A glaring design hole with multiple-choice questions is, what if my answer isn't one of the choices! The tried-and-true error mechanism for this is to provide an "Other" choice that opens up a text box where the user can provide whatever answer they want. "Other" is a multiple choice answer that breaks down the whole abstraction of multiple choice answers. There's no answer you could have that it can't handle! "Other" is so awesome. IMO it's even superior to having a clone of the survey designer's mind to chat with when I have an answer that's not in the list.
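Here's a minimal sketch of the "Other" escape hatch in code. All the names here are hypothetical, not from any real survey library; the point is just that one of the choices breaks out of the multiple-choice abstraction into free text.

```python
# A multiple-choice question with an "Other" fallback: if no listed
# option fits, the user can type anything at all.
def ask_multiple_choice(question, choices, pick):
    """Return the chosen answer; `pick` simulates the user's input."""
    options = choices + ["Other"]
    selected = pick(question, options)
    if selected == "Other":
        # The escape hatch: any answer whatsoever can come through here.
        return pick(question + " (please specify)", None)
    return selected

# Simulated user who wants an answer that's not in the list:
def user(prompt, options):
    if options is not None and "Teal" not in options:
        return "Other"
    return "Teal"

answer = ask_multiple_choice("Favorite color?", ["Red", "Green", "Blue"], user)
# `answer` is "Teal", even though "Teal" was never a listed choice.
```

The "Other" branch is exactly the error-mechanism-as-ad-hoc-behavior idea: instead of rejecting an unanticipated answer, the design steps down gracefully into a more open-ended layer.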
Thanks for trying this! I'm very curious to see if you manage it, but I don't think anybody here has tried it before. You're probably most likely to find help on the Racket list that you've been searching.
Again, do come back and tell us if you get this working and what it took.
This might be heresy, but I'd recommend also taking a look at http://lobste.rs. It looks like HN but is built on a different open-source platform. It supports email notifications out of the box, which is crucial in the early stages when your users haven't yet gotten into the habit of visiting your site.
All this is nothing arc+HN couldn't be modified to do, of course. But it doesn't do any of it right now.
While I don't have a Windows machine to test on right now, I got Arc working quite easily on one before (though I wasn't running HN, which would have exercised more code paths). Try it and see what happens.
Alternately, if this is an existing project, someone at the office should know how to get it working. If you try it and something breaks, let us know and we can help out!
"Hmm, what are some examples of languages with design holes, or language mechanisms that help programmers manage design holes?"
Design holes are one region of a Venn diagram: they're not part of the program's design, and yet it's possible to encounter them in the program's behavior.
If a specification document says a behavior is unspecified--I think the C spec is notorious for this--then that's where you'll find a concrete example of a design hole. The design is concrete, and any implementation is concrete, so their difference is concrete.
Personally, I see design holes whenever I want to run my program without finishing it. The unfinished parts are gaping holes.
"Lower down you suggest that an error when adding a string to a non-string is a design hole."
It's an example of a design hole if the designer doesn't care what happens.
"Any language designer would say that at least some of his errors are 'designed in'."
I'd say if the language designer really wants people to avoid a certain design hole, they can put a little fence around it. The fence is part of the design, but on the other side, there's still a hole.
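For a concrete instance of a fence, here's how Python handles the string-plus-number case mentioned above. The designed-in part is the error itself; what happens to the program after the error escapes is still left open.

```python
# Python's + puts a "fence" around the string-plus-number hole:
# the operation isn't given a meaning; attempting it raises TypeError.
try:
    result = "version " + 3
except TypeError:
    result = None  # execution stopped at the hole's edge

# The designer fenced the hole but didn't fill it; the sanctioned way
# around the fence is an explicit conversion.
filled = "version " + str(3)
```

JavaScript, by contrast, fills the same hole with coercion (`"version " + 3` evaluates to `"version 3"`), so there's no fence there at all.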
"Like I said privately, I disagree that exceptions are always discouraged."
Inasmuch as they're not discouraged, they don't count as an error mechanism. This is a gray area.
I know this is a slippery response to give, but this is about the way I set up my terminology, rather than the purposes I have for talking this way.
Hmm, what are some examples of languages with design holes, or language mechanisms that help programmers manage design holes? Lower down you suggest that an error when adding a string to a non-string is a design hole. The creators of Java would disagree. Any language designer would say that at least some of his errors are 'designed in'. So what's the subset of error messages you're getting at?
Downvoting of comments already exists in the code; each user is prevented from downvoting until that user accumulates enough karma. I don't have the source in front of me, but you could set the threshold for that to 1, so users would be able to downvote comments as soon as they register.
Downvoting submissions is another story. As far as I know, HN has never let users downvote submissions, so you'd have to build in that scaffolding yourself.
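To illustrate the comment-downvoting gate, here's a hypothetical Python paraphrase. The real HN code is written in Arc and I don't have it in front of me, so these names and this shape are made up purely for illustration.

```python
# Hypothetical paraphrase of the karma gate described above.
# Lowering the threshold to 1 would let users downvote comments
# as soon as they register (assuming new accounts start at 1 karma).
DOWNVOTE_THRESHOLD = 1

def can_downvote_comment(user_karma):
    return user_karma >= DOWNVOTE_THRESHOLD
```

The point is just that the mechanism already exists and the change is a single constant, whereas submission downvoting would need new scaffolding.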
I've recently been pondering the role of errors in language design, and this is where I'm at right now. My train of thought takes some rickety detours into the far future and ethics, so I took a week to clarify my thinking before I submitted this here.
I privately asked for akkartik and evanrmurphy's help along the way, and they had some good responses that helped me notice the most unclear spots. I hope they'll respond again now that it's here on Arc Forum. :)
I've never done Hindley-Milner type inference before, and I just took a look at Poly. It's nice to see the solve function there, looking nice and simple like I hoped. XD It seems Algorithm W just treats the program as a graph of type equality constraints, and it does substitutions and such until it runs out of equations to process, at which point it's collected a full map of type variable bindings. I hope that makes sense.
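To check my own understanding, here's a toy sketch of that substitution step. This isn't Poly's code (and it omits the occurs check); it just shows type equations being unified one at a time while a map of type-variable bindings accumulates.

```python
# Toy unification: type variables are strings like "t0", and concrete
# types are tuples like ("int",) or ("->", arg, result).

def resolve(t, subst):
    # Follow variable bindings until we hit a non-variable or unbound var.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(equations):
    subst = {}
    while equations:
        a, b = equations.pop()
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            continue
        elif isinstance(a, str):  # unbound type variable: bind it
            subst[a] = b
        elif isinstance(b, str):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0]:
            # Same constructor: equate the corresponding parts.
            equations.extend(zip(a[1:], b[1:]))
        else:
            raise TypeError(f"cannot unify {a} and {b}")
    return subst

# Infer t0 = int from the equation (t0 -> bool) = (int -> bool):
bindings = unify([(("->", "t0", ("bool",)), ("->", ("int",), ("bool",)))])
```

When the queue of equations runs dry, `bindings` is the full map of type variable bindings, which is the picture of Algorithm W I was describing above.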
To describe Pyret: its primary purpose is to be a teaching language, and its most remarkable technical feature seems to be the way it does type annotations and unit tests. I think these are just run time contracts for now, but they'll somehow pave the way for a static type system, limited by a policy that there must always be a way to understand the static type system in terms of run time behavior (https://news.ycombinator.com/item?id=6704276).
I don't see very much value (but I do see some) in a static type system if all it does is promote some run time errors to compile time. Module API enforcement and program inference are what I mainly care about as far as static types go, and Pyret doesn't seem to provide either of these. (Contracts don't quite reassure me that a client module will continue to obey the API throughout the lifetime of a multi-step interaction like a handshake or a higher-order loop. A contract violation error can occur partway through.)
If they ever weaken their policy about the type system mimicking run time checks, I think Pyret could support some special types and tests which require compile time processing if they're used. At the same time, it could still use all its expressive run time contracts. I'd really like to see this synthesis, but I don't really expect to; the implementors probably have enough to worry about just to build a static typechecker at all. :)
I wonder if Pyret's return value checks inhibit tail recursion optimization. Perhaps they can collapse into a single stack frame if they're repetitions of the same check.