Arc Forum
1 point by akkartik 4817 days ago | link | parent

Interesting. On a related note, I was thinking of renaming testify to something that indicates a function is the result. Maybe fun or asfn?

Edit: just read your note on punning. I wonder if we could overload fn for this purpose.



1 point by evanrmurphy 4817 days ago | link

Hmm... so you're thinking of renaming testify to fun or asfn? I don't know if testify is general enough to earn one of those names. Aren't there other common utilities that turn expressions into functions, like thunk?

Have you considered testifn?

Update: Here are some other naming possibilities you made me think of, though I'm not sure if/where they fit in:

- fnify

- fnize

- fnk (sounds like "thunk")

To be sure, I do like your asfn; I just don't think testify is the right fit for it.

-----

1 point by akkartik 4817 days ago | link

> Aren't there other common utilities that turn expressions into functions, like thunk?

Well, thunk creates a function out of code, but I'm not aware of any other function that turns data into functions.

In wart testify is now generic: if it sees a non-function, by default it compares; if it sees a function, it returns it; and if it sees a list it checks for membership.
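
In Arc terms, a rough sketch of that dispatch might look like the following (this isn't wart's actual code, which goes through wart's own generic-function machinery; it's just the shape of the behavior):

  (def testify (x)
    (if (isa x 'fn)   x                            ; functions pass through untouched
        (isa x 'cons) (fn (y) (some [iso y _] x))  ; lists become membership tests
                      (fn (y) (iso y x))))         ; everything else compares structurally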

-----

2 points by rocketnia 4817 days ago | link

In Penknife I've been mulling over 'testify a lot, trying to figure out whether there's a way to stop special-casing the 'fn type or to make it an everyday coercion function by introducing a 'test type or renaming it to 'fnify. I think I have the start of some answers....

Recently I posted an example utility, 'copdate, that acted exactly like 'copy but called functions on the existing values to get the replacement ones. http://arclanguage.org/item?id=13652

It would be cool if 'copy itself did that. Essentially we'd check the replacement value, and if it was a function, we'd treat it as a conversion function, and otherwise we'd just use it directly. This check could be done by a satellite utility:

  (def conversionify (x)
    (case type.x fn x [do x]))
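
For illustration, a call to a 'copy extended this way might look like so (the table and its keys are made up, and stock Arc's 'copy would just store the function itself rather than applying it):

  (= x (obj hits 3 label "old"))

  (copy x 'hits  [+ _ 1]   ; a function: converts the existing value, 3 -> 4
          'label "new")    ; a plain value: used directly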

The 'conversionify utility above is just as arbitrary about the 'fn type as 'testify is, and it has just as much right (IMO) to be called 'fnify. But now we have two cousins, so we can extract their common warts into another function:

  (def abehavior (x)
    (isa x 'fn))
  
  (def testify (x)
    (check x abehavior [iso x _]))  ; not using 'is here!
  
  (def conversionify (x)
    (check x abehavior [do x]))

This 'abehavior utility is naturally extensible. Its default behavior can be nil, and each of its extensions can return t under new conditions. The question "Is this type of value supposed to be idiomatic for specifying custom behaviors for everyday Arc utilities?" isn't hard to answer. A behavior will need to have a 'defcall implementation anyway, so that's a good way to decide (but not perfect, 'cause indexable data structures use 'defcall too).

  (def abehavior (x)
    nil)
  
  (made-up-rule-definer abehavior (x) (isa x 'fn)
    t)

Also, 'testify and 'conversionify have a kinda nice common semantics: Each of them coerces to the behavior "type" based on its own idea of what a normal behavior is.

Is there a way to expose a design flaw in between 'abehavior and the places it's used? What if there's a type of value that's especially good at encoding tests or conversions, but not idiomatic for specifying callbacks? In fact, you've already mentioned one:

> if it sees a list it checks for membership

I totally agree. I really like how Groovy's switch statements have that behavior. XD However, what's to say a list is an idiomatic way to give a list of options but a string isn't? Well, maybe I'd go with another function:

  (def acontainer (x)
    nil)
  
  (made-up-rule-definer acontainer (x) alist.x
    t)
  
  (def testify (x)
    [iso x _])
  
  (made-up-rule-definer testify (x) abehavior.x
    x)
  
  (made-up-rule-definer testify (x) acontainer.x
    [mem-not-using-testify _ x])

However nice (or paranoid ^^ ) this setup might be, what happens if you're looking for nil? Should nil be interpreted as an unsatisfiable condition? Well, that's probably not too bad; it's up to the utility designer to say what nil does, and we're in that realm at the moment. People will just have to say (some [iso x _] foo) instead of (some x foo) if x could be nil, even if they don't expect x to be any other kind of container or behavior.

Hopefully this rant has helped you nearly as much as it's helped me. :-p

-----

1 point by evanrmurphy 4817 days ago | link

> Well, thunk creates a function out of code, but I'm not aware of any other function that turns data into functions.

Ah, good point. I was glossing over the distinction.

-----

1 point by akkartik 4816 days ago | link

Hmm, perhaps asfn is a bad idea because it seems too similar to (as fn ..) which has different semantics. For lists and hashes it takes an arg to index with.

Perhaps I need coercions to fn to be sensitive to the number of args. With 0 args, behave like testify, with 1 arg, index, with 2 args.. who knows?

-----

1 point by rocketnia 4816 days ago | link

> Perhaps I need coercions to fn to be sensitive to the number of args. With 0 args, behave like testify, with 1 arg, index, with 2 args.. who knows?

I dunno.... The coercion itself would require just as much extra information (none) for each kind of coercion. (This is a great example of the one-conversion-per-type silliness that makes me give up on 'coerce in the first place.) Also, a test is a one-argument function, just like a list or table is, so the number of arguments doesn't distinguish anything there.

-----

2 points by akkartik 4816 days ago | link

Hmm, I don't mind one conversion per type combination. Primitive functions need to make some assumptions, and yes you could provide coercion functions to print, some, etc., but that seems verbose almost all the time.

The key to me is to make coercions extensible. Then if I ever need a coercion to behave differently I can just wrap one of the types and write a new coercion.

Lisp historically suffers from the problem of having too many variants of basic stuff (http://arclanguage.org/item?id=12803). But the problem isn't that it's impossible to find the one true semantic for equality or coercion, it's that the language designers didn't put their foot down and pick one. Don't give me two different names with subtle variations, give me one strong name.

This is related to (in tension with, but not contradictory to) extensibility. Common lisp gives us three variations of equality. Different string-manipulation functions react differently to symbols: (string 'a) works but (coerce 'a 'string) dies. And neither equality nor coercion nor any primitives are extensible. This is exactly wrong-headed: give me one equality, make the string primitives all handle symbols uniformly -- and give me the empowering mechanisms to change the behavior when I need to.

I got rid of is from wart a few weeks ago.

-----

2 points by rocketnia 4816 days ago | link

> The key to me is to make coercions extensible. Then if I ever need a coercion to behave differently I can just wrap one of the types and write a new coercion.

You could manually type-wrap your arguments when calling basic utilities, but that seems verbose almost all the time. ^_- If you're talking about putting the wrapper on the target type, like (rep:as special-coercion-wrapper x), that's still more verbose than special-coercion.x.

Something I've said for a while is that individual coercion functions like 'int and 'testify should be the norm. For every type 'coerce handles, there could be a global function that handled that type instead. The only thing this strategy seems to overlook in practice is the ability to convert a value back to its original type, and even that coercion-and-back can be less leaky as a single function (like the one I mention at http://arclanguage.org/item?id=13584).
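
Arc already leans this way for a few types; if I remember right, 'int in arc.arc is essentially just a thin global wrapper over 'coerce:

  (def int (x (o b 10))   ; one global function per target type,
    (coerce x 'int b))    ; so callers never have to touch 'coerce directly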

---

> Don't give me two different names with subtle variations, give me one strong name.

The difference between 'testify and 'conversionify (http://arclanguage.org/item?id=13678) isn't subtle, and neither is the difference between 'pr and 'write (or the more proper coercions [tostring:pr _] and [tostring:write _]). The purpose of a coercion isn't entirely explained by "I want something of type X." In these cases, it's "I want a Y of type X," and the Y is what's different.

Perhaps you can make a 'test type to turn 'testify into a coercion and an 'external-representation type so that 'write can be implemented in terms of 'pr. Maybe you can even replace the 'cons type with a 'list type to avoid the awkward (coerce "" 'cons) semantics, with (type nil) returning 'list. At that point I'll have fewer specific complaints. :)

On another note, I suspect in a land of polymorphism through extensibility, it makes more sense to test the type of something using an extensible predicate. If that's common, it'll probably make more sense to coerce to one of those predicates than to any specific type. Maybe 'coerce really should work like that? It could satisfy the contract that (as a b) always either errors out or returns something that satisfies the predicate a. This isn't to say I believe in this approach, but I hope it's food for thought for you. ^_^
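
For what it's worth, a bare-bones sketch of that contract (as-pred and coercions-for-pred* are names I'm inventing here; nothing like this exists in Arc):

  ; coercions keyed by extensible predicate rather than by type symbol
  (= coercions-for-pred* (table))

  (def as-pred (pred x)
    (if (pred x)
        x                                 ; already satisfies the predicate
        (aif (coercions-for-pred* pred)
             (check (it x) pred
                    (err "coercion broke its own contract"))
             (err "no coercion registered for this predicate"))))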

---

> I got rid of is from wart a few weeks ago.

Well, did you undefine 'eq too? Without a test for reference equality, there'll probably be gotchas when it comes to things like cyclic data structures, uninterned symbols, and weak references.

-----

2 points by akkartik 4816 days ago | link

definitely food for thought, thanks :)

  Me: "Don't give me two different names with subtle variations,
  give me one strong name."
  You: "The difference between 'testify and 'conversionify isn't subtle.."

Yeah I wasn't talking about your previous suggestions. I was talking about eq vs eql vs equal, or about string vs coerce 'string. I was talking about subtle variations in names.

Yes I still use eq. It's useful in some cases, no question, but not so often that it merits thinking about what to call it. If I were creating a runtime from scratch, I'd give it a long name, like pointer-equal or something. If I find a better use for the name I'll override it without hesitation.

Names that take up prime real estate are like defaults. Having a bunch of similar-sounding, equally memorable words that do almost the same thing is akin to having large, noisy menus of preferences. They make the language less ergonomic, they make it harder to fit into people's heads. "What does upcase do to a list again? Does it use this coerce or that one?"

If the default doesn't work for someone of course they'll make their own up. This is lisp. They're empowered. I don't need to consider that case. My job as a language designer is to be opinionated and provide strong defaults.

I'm only arguing the general case for coercions. It's possible that we need multiple kinds of coercions in specific cases, and that these need to have prime real estate as well.

-----

1 point by rocketnia 4816 days ago | link

Okay, agreed on pretty much all accounts. ^^

---

> If I were creating a runtime from scratch, I'd give [is] a long name, like pointer-equal or something.

Yeah, I think it's a bit unfortunate that 'iso has a longer name than 'is. I've thought about calling them '== and '=== instead, or even '== and 'is for maximum brevity, but those are more confusing in the sense you're talking about, in that their names would be interchangeable. I can't think of a good, short name for 'iso that couldn't be mistaken for the strictest kind of equality available. "Isomorphic," "equivalent," "similar," and maybe even "congruent" could work, but "iso" is about as far as they can be abbreviated.

...Well, maybe "qv" would work. XD It has no vowels though. Thinking about Inform 7 and rkts-lang makes me strive for names that are at least pronounceable; I remember Emily Short mentioning on her blog about how a near-English language is easier for English typers to type even when it's verbose, and rkts posting here about how it's nice to be able to speak about code verbally. I think it's come up a few times in discussions about the name 'cdr too. ...And I'm digressing so much. XD;;;;

---

> I'm only arguing the general case for coercions.

This is the one spot I don't know I agree with, but only because I don't know what you mean. What's this "general case" again?

-----

1 point by evanrmurphy 4816 days ago | link

How about id for is and is for iso? :)

Update: After reading over the ancestor comments more carefully, I guess this wouldn't address akkartik's concern. Nonetheless, I kind of like id.

> "Isomorphic," "equivalent," "similar," and maybe even "congruent" could work

Also keep in mind the words "alike", "same" and "exact".

-----

4 points by akkartik 4815 days ago | link

id could also mean the mathematical identity function, which arc calls idfn probably because it's a lisp-1 and we want to be able to create locals called id.

Hmm, how about if id were a low-level beast that converted any object into, say, a ptr type containing its address? Now instead of (is a b) you'd say:

  (iso id.a id.b)

That seems to me about the right level of verbosity for doing the things is does that iso can't do.

-----

2 points by rocketnia 4815 days ago | link

I agree with that! :)

In Java, everything uses equals() where it can, but then it's not easy to get == behavior when it matters. Technically you can use a wrapper:

  // Wrapper whose hashCode/equals use the reference identity of the wrapped object.
  public final class Id
  {
      private final Object x;
      public Id( Object x ) { this.x = x; }
      
      public int hashCode() { return System.identityHashCode( x ); }
      public boolean equals( Object obj )
          { return obj instanceof Id && ((Id)obj).x == x; }
  }

I'm digressing, but it would be so much nicer if every object kept a hard reference to its Id wrapper. That way the Id could be used as a WeakHashMap key.[1] Weak tables are one place where comparing keys for anything but reference identity makes no sense. XD

Back in terms of (iso id.a id.b), this would have the observable effect that (iso id.id.a id.id.a) would be true for all a.

[1] Thanks to the hard reference to the Id from its x, the Id itself wouldn't be collected until its x was unreachable too. An Id without such an anchor would potentially be garbage-collected much earlier than its x, and the WeakHashMap entry would be lost. ...Anyway, some of this could be solved by changing the design of WeakHashMap. :)

-----

1 point by evanrmurphy 4815 days ago | link

I like this! Would the id of a primitive just return itself?

  id.4 ; evaluates to 4 or the address where this 4 is stored?

-----

3 points by akkartik 4815 days ago | link

I think all we need is that it be a stable value when literal objects are interned. So id.4 would probably never change, and (id "a") would change in common lisp because it creates new strings each time.

-----

1 point by rocketnia 4816 days ago | link

The name "is" is just so perfect though. What we're talking about is the difference between "the same regardless of who and why you ask" and "the same if you ask the type designer." When I see something like "is" that plainly communicates sameness, I assume it's the no-matter-what version. On the other hand, saying things are "similar" doesn't entail they're identical; they might only be identical enough.

I do like "alike." It's short, and it suggests it has something to say about non-identical arguments. It's better than "like," because (like x y) could be interpreted as "x likes y" or "cause x to like y."

-----

1 point by akkartik 4815 days ago | link

"The name 'is' is just so perfect though."

To stretch my analogy to breaking point, that's like saying I wish we could build a hundred yards into the ocean, it would make such fine waterfront property :) Some things are 'prime' but not real estate.

Ok that is probably not clear. I think I'd use is or iso, but not both.[1] iso is perfect because it is short and precise and conjures up the right image. I'll probably never find a use for is. That's ok. Good names minimize cognitive overhead, but there's still some overhead. The best name is the one you don't need, and there are enough good names that we don't have to micro-optimize.

---

I'm not sure which variant is "the same regardless of who and why you ask". In common lisp there's 30 years of rationalizations why this is 'intuitive':

  * (eq "a" "a")
  nil

Yet arc chose otherwise:

  arc> (is "a" "a")
  t

So clearly it's not "the same regardless of who you ask".

If you created syntax for hash-table literals you may want this to be true:

  (is {a: 0} {a: 0})

So the semantics of is are actually quite fungible. If pointer equality is a low-level beast that requires knowing internal details, let's not put it on a pedestal and give it a prime name.

(Again I'm less certain than I seem.)

[1] I think isa is fine even though it's so similar in form, because it's different enough from iso in behavior.

-----

1 point by rocketnia 4815 days ago | link

> I'm not sure which variant is "the same regardless of who and why you ask".

By "the same regardless of who and why you ask," I mean something at least as picky as "the same regardless of what algorithm you try to use to distinguish them." Notice that the pickiest equality operator in a language is the only one that can satisfy that description; any pickier one would be an algorithm (well, a black-box algorithm) that acted as a counterexample.

I think 'eqv? is this operator in standard Scheme and 'eq? is this operator in any given Scheme implementation (but I dunno, maybe it's more hackish than that in practice). Meanwhile, I think 'is acts as this operator in a "standard" subset of Arc in which strings are considered immutable. (A destructive algorithm can distinguish nonempty mutable strings.) I avoid string mutation in Arc for exactly this reason.
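
Concretely, a destructive check along these lines can tell apart two strings that 'is calls the same (just a sketch; the name is made up):

  ; distinguish aliased from merely-equal strings by mutating one of them
  (def same-string-object (a b)
    (if (or (empty a) (empty b))
        (err "can't run the destructive test on empty strings")
        (let (old-a old-b) (list (a 0) (b 0))
          (= (a 0) (if (is old-a #\x) #\y #\x))  ; poke a's first char
          (do1 (isnt (b 0) old-b)                ; b changes iff it's the same object
               (= (a 0) old-a)))))               ; restore a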

---

> So the semantics of is are actually quite fungible.

Yeah, especially once you get down to a certain point where nothing but 'is itself would allow you to compare things, then 'is is free to be however picky it wants to be. A minimally picky 'is would solve the halting problem when applied to closures, so something has to give. :)

The least arbitrary choice is to leave out 'is altogether for certain types, like Haskell does, but I dunno, for some reason I'm willing to sacrifice things like string mutation to keep 'is around. I mean, I think it's for the sake of things like cyclic data structures, uninterned symbols, and weak references, but I don't think those things need every type to have 'is.... Interesting. I might be changing my mind on this. ^_^

-----

1 point by akkartik 4816 days ago | link

In general coerce is more useful than a bunch of specific coercion functions (say of the form srctype->desttype). Specific cases like testify may need multiple coercion functions, though.

-----

1 point by akkartik 4816 days ago | link

"Perhaps you can make a 'test type to turn 'testify into a coercion and an 'external-representation type so that 'write can be implemented in terms of 'pr. Maybe you can even replace the 'cons type with a 'list type to avoid the awkward (coerce "" 'cons) semantics, with (type nil) returning 'list."

Yeah, perhaps I've been too conservative in designing wart, and I shouldn't be so afraid of proliferating types. Hmm, perhaps if there was a way to automatically coerce types as needed.. Say I wrote a function that can only handle lists and I pass it a string: the runtime dynamically coerces the string to a list. If there's no direct coercion from strings to lists, it finds one that goes through lists of chars.. Hmm, I wonder if people have tried this sort of dynamic searching for paths through the coercions table. If I could do that I'd be more carefree about spawning new types. Just create a new 'test type, write a coercion to 'fn, and everything from before would continue to work.

---

You're right of course, and I was wrong: you can't disambiguate the cases just by number of args. And even if you could that is perhaps hacky.

-----

1 point by rocketnia 4816 days ago | link

Proliferating types is something I don't blame you for worrying about. Since your extension system is based on one extension per type, introducing a new type means extending all the necessary utilities again. At one point I thought that was fine, that I could just put in macros that performed several extensions at once. Now I prefer a base-extensions-on-support-for-other-extensible-things approach for Penknife, but I haven't had enough time to figure out what the pros and cons are.

I've definitely considered that path-search idea before, and I think at one point I talked about why I don't have much faith in it. The short of it is, if there are multiple coercion paths from one type to another, how do you determine which path to use? Be careful; if there's already A->B->C and I define a new type D with A->D and D->C, then code that expects A->B->C might use A->D->C and break. Technically you could propagate the transitive closure each time a coercion is defined, thereby creating the kind of stability needed to solve that, but I don't know if that's intuitive enough. Maybe it is. :)
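
To make that concrete, here's a minimal sketch of the path-search idea in Arc (coercions*, def-coercion, and coercion-path are all names I'm making up; nothing like this exists in arc.arc):

  (= coercions* (table))             ; maps (list from to) -> conversion fn

  (def def-coercion (from to f)
    (= (coercions* (list from to)) f))

  (def coercion-path (from to)
    ; breadth-first, so a shorter chain always beats a longer one; ties
    ; between equally short chains are still arbitrary, which is exactly
    ; the A->B->C vs A->D->C worry above
    ((afn (frontier)
       (whenlet (path . more) frontier
         (let here (last path)
           (if (is here to)
               path
               (self (join more
                           (map [join path (list:cadr _)]
                                (keep [and (is (car _) here)
                                           (no (mem (cadr _) path))]
                                      (keys coercions*)))))))))
     (list (list from))))

This only finds the route; composing the registered conversion functions along the returned path would be the obvious next step.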

-----

2 points by akkartik 4815 days ago | link

It's definitely one of the goals of wart to minimize the number of methods you have to give a new type to get all the primitives to work. sort isn't generic because the comparison operators are.[1] I think python's __names__ give us a pretty good approximation of what we need.

[1] https://github.com/akkartik/wart/blob/58f6cfd2f5a0a03d35adb5...

-----

1 point by akkartik 4816 days ago | link

> You could manually type-wrap your arguments when calling basic utilities, but that seems verbose almost all the time. ^_-

And in your approach you'd have to hardcode some coercion in every primitive.

> If you're talking about putting the wrapper on the target type, like (rep:as special-coercion-wrapper x), that's still more verbose than special-coercion.x.

Whether you care about that verbosity depends on how often you need special-coercion. I just don't want some rare concept taking up space in the namespace.

Ok, enough generalities in this sideshow. Focus on my other responses :)

-----

1 point by rocketnia 4816 days ago | link

> And in your approach you'd have to hardcode some coercion in every primitive.

I don't follow. In Arc I never say (coerce x 'sym); I always say sym.x instead. Neither of those is more hardcoded than the other, IMO; sym.x hardcodes the 'sym global binding, while (coerce x 'sym) hardcodes the 'coerce global binding and the 'sym type symbol, but they're both ways of looking up the same separately specified behavior.

...Oh, by "primitive" do you mean a function like 'sym? I think it's just a matter of perspective whether 'sym or 'coerce is the more basic concept. IMO, 'coerce acts primarily as a way to offload some functions into a less convenient namespace (which I could do myself with a verbose naming scheme), and secondarily as a way to add an extra dimension of meaning to type tags when they're not attached to types (which strikes me as suspicious wrt extensibility...).

Hmm, I guess the 'type function itself breaks polymorphism. I wouldn't arbitrarily limit the programmer from using 'type, and I know your 'defgeneric relies on it almost intrinsically, but now I've persuaded myself a bit more that I'd rather avoid dealing in specific type tags whenever possible.

---

> Ok, enough generalities in this sideshow. Focus on my other responses :)

Apparently I don't do that. <.< Speaking of which, what was the original topic again? >< (*looks it up*)

-----

1 point by akkartik 4815 days ago | link

:) Depends on how far back you go, but I choose to think it's about whether coerce is worthwhile or can be taken out.

If we can have 2 coercions to function, testify and checkify, every primitive that needs a function now must decide which one it's using. That's what I meant by hardcoding. You're trading off verbosity in setting things up for a new type (which I hope will be relatively painless) with just not being able to override certain decisions. I think I'd err toward being more configurable albeit verbose when a type needs a second coercion.

-----

1 point by rocketnia 4815 days ago | link

Uh, I only suggested 'checkify as a less confusing name for 'testify. If a language does include both, then yes, it's a bit arbitrary whether something uses one or the other. :) Orthogonality fail.

I believe the choice between 'testify and 'conversionify is very clear. There's no way 'all and friends would use 'conversionify, and there's no way my hypothetical update to 'copy would use 'testify.

---

> You're trading off verbosity in setting things up for a new type (which I hope will be relatively painless) with just not being able to override certain decisions.

If you have a choice between (as test x) and (as check x), it's just as hard to override that decision.

-----

1 point by akkartik 4815 days ago | link

Yeah, but I'm encouraging my users to extend using new types :)

-----

1 point by rocketnia 4815 days ago | link

I take it that's a response to the "it's just as hard to override" part, right? If so, what do you mean? :-p

-----

1 point by akkartik 4815 days ago | link

I want to try really hard to have just one coercion per type combination, and let users create new types when they want a different conversion. You point out that's verbose, but it may not be overly so, and at least it's overrideable.

testify vs conversionify (ugh :) may be a case where the primitives themselves need multiple coercions. That's fine :) I'm just saying I'd try really, really hard to avoid it because people now have to keep track of two such things. And I'd definitely try to designate one of them the 'default' coercion like arc already does.

-----

1 point by rocketnia 4815 days ago | link

> conversionify (ugh :)

Yeah, I might have called it 'coercify if not for the confusion it would cause. ;) And so far I only have one (niche) use in mind for it, so it doesn't have to be nice and brief.

Actually, it would make more sense to call it "transformify." Both "coerce" and "convert" have the connotation of at least sometimes transforming from one kind of value to another. For my purposes, the function [+ 20 _] is a valid behavior--it certainly makes sense to 'copy a data structure but add 20 to one of its parts--and yet the inputs and outputs of that transformation are of the same kind.

---

> may be a case where the primitives themselves need multiple coercions.

Nah, you can resort to (as test x) and (as conversion x), with 'test and 'conversion--or 'transform--being callable types.

This is the same thing I meant when I said "you can make a 'test type to turn 'testify into a coercion."

---

> I want to try really hard to have just one coercion per type combination, and let users create new types when they want a different conversion. You point out that's verbose, but it may not be overly so, and at least it's overrideable.

Feels like I'm talking past you a bit.

When users want different conversions, they can already define new "anything -> type-foo" global functions, and those are already extensible/overrideable. There's no need to add a framework for that, even if it does manage to be a tolerably simple framework with a tolerably brief API.

Code that chooses between the super-similar testify.x and checkify.x is just as doomed to being arbitrary and hardcoded as if it were to choose between (as test x) and (as check x). The 'coerce framework doesn't help with this hardcoding issue.

---

> And I'd definitely try to designate one of them the 'default' coercion like arc already does.

I don't generally agree with designating one operation as default simply based on the types it manages, like "anything -> type-foo." For a more obvious example, it would be especially silly to identify a default "number x number -> number" operation.

Still, I think we agree on what's important. If the point of operation X is to reshape things to expose their fooish quality, then we don't want an operation Y whose point is also to reshape things to expose their fooish quality.

You draw the line farther toward "reshape things to expose their _____ish quality" and embrace 'coerce as a way to abstract the _____ away. As for me, I suppose I've come to believe _____ is an intrinsically subjective, second-class concept.

At one point I wanted Penknife to have a type type, with operations like list'ish.x, list'ify.x, [inherits list seq], and [new.list a b c], which could then be abstracted into other utilities like [def list my-list a b c]. It's been a while, and now I've kinda given up looking for a great advantage; I think the one plus would be having only a single global variable to attach the list documentation to, and I'm not especially excited about that. ^_^

These days I have the vague notion that a user-defined type which extends all the right things oughta be an equally pure representation of _____ish quality; the correspondence between _____s and types isn't one-to-one.

</overuse-of-metasyntactic-variables>

-----

1 point by akkartik 4815 days ago | link

"You draw the line farther toward "reshape things to expose their _____ish quality" and embrace 'coerce as a way to abstract the _____ away. As for me, I suppose I've come to believe _____ is an intrinsically subjective, second-class concept."

That's a pretty good summary. I really haven't considered that alternative at all, so keep me posted on how it goes :)

-----

1 point by rocketnia 4815 days ago | link

Lol, by "second-class concept", I mean something that isn't modeled as a value in the language. My gut reaction is that it takes zero work to pursue that alternative. :-p

But yeah, I do still need to solve the same problems 'coerce solves. My alternative, if it isn't obvious by now, is to have global functions that do (fooify x) and occasionally (transform-as-foo x (fn (fooish-x) fooish-result)).
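
A concrete instance of that pattern, with lists standing in for foo (listify and transform-as-list are names I'm making up for the sketch):

  (def listify (x)
    (if (alist x) x (coerce x 'cons)))

  (def transform-as-list (x f)
    ; widen to a list, apply the transform, then focus back to the original type
    (coerce (f (listify x)) (type x)))

  (transform-as-list "hello" rev)  ; => "olleh"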

-----

1 point by akkartik 4815 days ago | link

:) I don't mean building it. I mean using it.

coerce, ify, transform, these seem really similar to me. Why bother splitting such hairs?

I'm reminded of your earlier statement: The purpose of a coercion isn't entirely explained by "I want something of type X." In these cases, it's "I want a Y of type X," and the Y is what's different. Perhaps you didn't go far enough? If you're going to eschew coerce, why not get rid of all trace of the types? Don't call it stringify, call it serialize. Don't call it transform-to-int, call it parse-number. Don't call it aslist, call it scan. Or something like that. It isn't about these exact names, but good name choices would bring me entirely into your camp :)

-----

1 point by rocketnia 4815 days ago | link

> I mean using it.

Ah, right. ^_^ Yeah, most of the times I "use" these things, I'm just making some other abstract nonsense utility. It could be a while until I have another down-to-earth application to try this stuff on.

> coerce, ify, transform, these seem really similar to me.

Well, I don't distinguish "ify" from "coerce" except when it helps tell apart the subtle variants we're talking about in this conversation. :-p The subtle variants wouldn't all go into the same language, I hope.

"Transform," on the other hand, seems more general. Really, 'transform as a type wouldn't mean anything other than "unary function, preferably pure." I could very well be splitting hairs by drawing a distinction between that context and the everyday "function" context. However, I think there are some things I want to treat differently between those cases, like ints (multiplication in function position, constant functions in transform position).

> If you're going to eschew coerce, why not get rid of all trace of the types?

Actually, that's how I think about it. Coercion is a technique to focus output ranges or widen input ranges so utilities are easier to use. If I consistently want a certain kind of focusing--and only then--I'll put it into a utility.

I name those utilities to correspond not with specific types but with informal target sets, usually based on the extensible utilities they support. However, any given tagged type means little except the extensions associated with it, so I think my approach has a natural tendency toward corresponding names anyway.

> Don't call it [this], call it [that].

I think I agree with you there. However, I don't expect to come up with witty names as fast as I change my mind about the designs of things. :) For now I'm happy resorting to cookie-cutter names like 'fooify, 'fooish, and 't-foo, just so it's easier to notice relationships between variables and between informal concepts.

-----

1 point by akkartik 4814 days ago | link

"Coercion is a technique to focus output ranges or widen input ranges.."

That's a nice way to think about it.

"I don't expect to come up with witty names as fast as I change my mind about the designs of things. :) For now I'm happy resorting to cookie-cutter names like 'fooify, 'fooish, and 't-foo"

Yeah, makes sense.

-----