Arc Forum

25 points by aidenn0 3344 days ago | link

This probably isn't the sort of thing that pg is talking about, but what I would like is a large and modern standard library.

I do most of my exploratory programming in Python these days, not because I like it better than CL, but because I don't want to spend 6 hours tracking down a library that does X and is compatible with whatever CL implementation Y I happen to be using. It seems like the only truly universal extension to CL since the ANSI spec was written is Gray streams. The best-case scenario is that I find an ASDF package and it happens to work with the implementation I am using.

However, most of the time I'll install it on CLISP and it won't work, then try it on SBCL and it does (or vice-versa).

With Python, I just load up the module reference and spend 5 minutes finding the module that ships as part of the distribution that does what I need.

CL doesn't even have a standard way to connect to a UDP socket. That's a minor thing in and of itself, but I've never tried to prototype any significant program and not run into something like this.
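For comparison, UDP in Python is entirely stdlib; a minimal sketch using only the socket module (the loopback address and payload are made up for illustration):

```python
import socket

# Open a UDP socket, bind it to an ephemeral loopback port,
# and bounce a datagram off ourselves.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
addr = sock.getsockname()
sock.sendto(b"hello", addr)       # send a datagram to our own address
data, sender = sock.recvfrom(1024)
sock.close()
print(data)                       # b'hello'
```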

I think the creators of Arc get this, since one of the things included is a webserver. Also, having a central clearinghouse for the language and having the implementation be the documentation both lend themselves well to allowing a large standard library to grow, so I am very hopeful.

After all, Python did not have any sort of impressive library in 1991 when Guido first publicly posted it (it did have a module system though hint-hint).


5 points by tel 3344 days ago | link

Seconding on the module system. It may not be the biggest concern of a young language, but it's a big part of the infrastructure that will let the language grow.


3 points by marvin 3343 days ago | link

And don't forget good documentation, preferably built-in. Whenever I am exploring something I haven't done before, it's a great boon not having to read the sources, or even look it up on the web. Python's help() is wonderful.


1 point by yters 3343 days ago | link

Since MzScheme can call out to C/C++ does that solve the lack of libraries?


10 points by parenthesis 3344 days ago | link

I, for one, endorse your approach of getting the foundations right before building towers atop.

I venture an explanation of the character-set controversy: IIRC, you have written of wanting Arc to be good specifically for web programming. (And obviously you are using it thus.) ASCII is fine when working on code to do symbolic differentiation (as I believe McCarthy was interested in at the dawn of time). But for a web app for the Chinese market, say, Unicode is obviously going to be involved. I think perhaps people were just expecting 'new web-app language' to entail Unicode support.


7 points by metageek 3344 days ago | link

"I think perhaps people were just expecting 'new web-app language' to entail Unicode support." -- I would go further: I would say that many of us consider Unicode support an essential; it's part of getting the foundations right. PG mentioned hearing/reading Guido talk about the pain of switching Python's character support--but what was painful was the switching, not the character sets. The sooner Arc makes that switch, the less pain it'll be.


4 points by lg 3344 days ago | link

I think he said it was painful for Guido because he had to worry about backwards-compatibility...and that won't be an issue for Arc.


3 points by metageek 3343 days ago | link

It might. PG has decided not to worry about the pain of backwards compatibility for other people's code; but he still has to consider his own code.


1 point by Gotttzsche 3344 days ago | link

He said that, but didn't they decide to break the backwards-compatibility with Python3000?


4 points by arkanes 3343 days ago | link

Releasing a language which only supports ASCII is a travesty today. If what you really meant was that characters would be opaque streams of bytes, you should have said that; the fact that you didn't indicates that you don't really understand the issues involved. If that is why you didn't write any Unicode support, you should have said that, too.

But you didn't just say you didn't have any support for it, or that you didn't understand the issues and needed to figure out what best to do. You were dismissive in the extreme of the entire idea - "I don't want to spend even a day on character sets" - and for someone who pretends to be working on a new language for web development this is such a monumental lack of judgment and information that it taints the entire language. The issue with HTML tables is similar, but to a lesser degree.

If this were some college kid's half-baked homework Lisp implementation, nobody would give a damn. But you've been preaching about Lisp forever, you've been talking about Arc for years, and you've talked a lot about the hundred-year language. This is what we're supposed to look at for our revolution in computing? Even Visual Basic has high-quality Unicode support.

Making your HTML libraries dump stuff as tables is just silly, too. It's not egregious, like saying that unicode doesn't matter, but come on now. It's not 1978 anymore. There's an expectation that you have spectacularly failed to live up to.

Also, the fact that the forum tells you to create an account only after you've written and submitted your comment? Extremely crappy.


3 points by lupisak 3343 days ago | link

Sorry to break it to you, but you have totally missed the point. Arc is not finished; it is still an experimental language. There will be Unicode support, but there are many core issues yet to be resolved. You shouldn't use Arc for actual projects right now anyway, as Paul has already warned that future changes not only can but will break old code. So this is not the time to worry about Unicode. It's a bit like building a house and worrying about the curtains while you're still stacking bricks...


2 points by D_T 3343 days ago | link

Wow Arkanes. You must be really mad. I wonder if you are one of those people who is angry because PG beat you to it?



7 points by earthboundkid 3344 days ago | link

You're conflating two things: the kind of language that is good for creating "quick and dirty" hacks, and the kind of language that one can produce through "quick and dirty" hacks. The two are not the same. Python, for example, is the Q&D hacking language of choice for many people today, but it is not itself quick or dirty at all. That's why people use it. They love the deep library and the gestalt of a consistent programming UI. Those things sound simple, but it turns out simple is harder than complex. It's easy to make something complex: look at PHP. It's hard to make something simple: look at Apple's products.

The goal for Arc needs to be clarified. Is it, "There oughta be a language for PG to write dirty hacks in?" or "There oughta be a language for PG to write using dirty hacks?" The Unicode decision seems like a clear indication that the latter is the case, and people who were expecting the former are unhappy about it.


12 points by zachbeane 3344 days ago | link

Arc's version of CL's (incf (gethash k table 0)) is gross.


5 points by randallsquared 3344 days ago | link

Seems like it wouldn't be hard to add defaults to hash referencing (and string referencing), to allow (++ (table k 0)), which seems much nicer than the CL version, in my opinion.

Edit: Actually, not string and array referencing, since defaults don't matter for that, and there's a much more useful meaning for that: slices.


6 points by pg 3344 days ago | link

I might do it that way, but it seems cleaner to do it when hash tables are created.


5 points by randallsquared 3344 days ago | link

Only if every key is a similar type. The default that's most appropriate is often peculiar to how you're using the value or location at the use site.


2 points by lg 3344 days ago | link

speaking of hash tables: I remember in ACL you explained why CL's gethash has two return values, to differentiate between the nil meaning "X is stored as nil in my table" and the nil meaning "X is not stored in my table". So why not in Arc?


2 points by pg 3343 days ago | link

Because it turns out that in practice it's rarely, if ever, an issue. If you have nil as a value in a hash table, you usually don't care whether that's because it was never set or because it was explicitly set to nil.


1 point by dr_drake 3338 days ago | link

Dear Paul, I cannot believe you would make such a statement. Either you're living in a vastly different programming universe than the one I am living in, or you really haven't done that much programming at all. In any case, there are many situations where one stores types of values that may include nil in a hash table, and in most of these there is a very significant difference between 'value is nil' and 'value is not stored'. I understand that Arc isn't trying to be all 'enterprisey', but these are fundamental concepts that, I thought, only complete amateurs did not understand. Sincerely, Dr. Drake


2 points by pg 3338 days ago | link

You know, I do actually understand the difference between the two cases. What I'm saying is that in my experience hash tables that actually need to contain nil as a value are many times less common than those that don't.

In situations where the values you're storing might be nil, you just enclose all the values in lists.

My goal in Arc is to have elegant solutions for the cases that actually happen, at the expense of elegance in solutions for rare edge cases.
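The wrap-values-in-lists trick isn't Lisp-specific; any table whose lookup has a single "missing" marker can use it. A hypothetical Python sketch, with dict.get playing the role of the nil-returning lookup and a one-element list as the box:

```python
# Boxing every value in a one-element list lets a single "missing"
# marker (here None) coexist with None as a legitimate stored value.
table = {}
table["a"] = [None]   # nil/None stored on purpose
table["b"] = [42]

def lookup(table, key, absent="missing"):
    box = table.get(key)      # None here can only mean "never set"
    return absent if box is None else box[0]

print(lookup(table, "a"))   # None  (explicitly stored)
print(lookup(table, "c"))   # missing
```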


1 point by mschw 3340 days ago | link

Python's defaultdict takes a factory function at construction time.
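For reference, that defaultdict behaviour - a factory supplied at construction time, invoked on first access to a missing key:

```python
from collections import defaultdict

# int() returns 0, so missing keys spring into existence as 0
# and incrementing needs no explicit check.
counts = defaultdict(int)
for word in ["a", "b", "a"]:
    counts[word] += 1
print(dict(counts))   # {'a': 2, 'b': 1}
```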


12 points by pg 3344 days ago | link

That's true. Fixing that is one of the top priorities.


3 points by ryantmulligan 3344 days ago | link

Could you please elaborate on what Arc's version is? Personally I don't even understand what your CL code is doing, being a Lisp Newb.


5 points by jimbokun 3344 days ago | link

(incf (gethash k table 0))

table is a hash table and k is a key; gethash returns the value in table for k, or 0 if no value for k is found. Think of incf as ++: it increments that value by 1 and stores the result as the value for k in table.


7 points by pg 3344 days ago | link

And the problem with Arc is that currently the default value for an entry in a hash table is nil, rather than zero. If h is a hash table and you know (h 'foo) is 1, you can safely say

  (++ (h 'foo))  
But if you don't know whether (h 'foo) has a value yet you have to check explicitly:

  (= (h 'foo) (+ 1 (or (h 'foo) 0)))


1 point by metageek 3344 days ago | link

How about if h takes an optional second argument, which is the default, and the macros are smart enough that you can do (++ (h 'foo 0))?


1 point by greatness 3340 days ago | link

I agree, this is probably the best solution.


1 point by reitzensteinm 3344 days ago | link

Perhaps (++ containsnil) should result in 1 anyway? Is there any case where that would break anything?


2 points by simonb 3344 days ago | link

For one, it breaks the expectations of a strongly typed language.

If something goes wrong, you want to fail as soon as possible, not propagate the defect through the system.


1 point by reitzensteinm 3343 days ago | link

Oh, it definitely throws strong typing right out of the window.

The reason I suggested it is because it would seem that almost all of the time where you go to do an increment on a nil value, you're working with an uninitialized element (not necessarily in a hash map) and treating that as 0 (as you're doing an increment) would in a certain sense be reasonable behaviour.

But I guess you're right, in the case where nil does represent an error, it'll be two steps backwards when you go to debug the thing.


1 point by william42 3337 days ago | link

Or perhaps just set containsnil to 0 when you do that. (Knowing pg, this would probably work.)


1 point by Tichy 3344 days ago | link

What is the usage scenario for that? I have never written such code (incrementing values in hash tables) - maybe it is more common in Lisp?


1 point by bOR_ 3344 days ago | link

Happens when you want to count the frequency of items in a list, and I've been doing that all the time (tallying gene frequencies in an agent-based model).

In Ruby I'd extend the Array class with this code (Hash.new(0) gives missing keys a default of 0):

  class Array
    def categorize
      hash = Hash.new(0)
      self.each {|item| hash[item] += 1}
      return hash
    end
  end
although the other day I saw someone achieve the same thing using inject (the `; hash' part is only there because inject demands it; the work is done earlier):

  array.inject(Hash.new(0)) {|hash,key| hash[key] += 1; hash}
Noticing that Lisp / Arc is more concise indeed. I'll have fun learning it.


1 point by smallpaul 3341 days ago | link

Why would you extend rather than subclass the Array class? It kind of confirms all of my worst fears about Ruby's too-easy class reopening. (What happens when someone else defines an Array method called "categorize" for a totally unrelated purpose?)

I think that the Python syntax for this is

h[x] = h.get(x, 0) + 1

It isn't quite as concise as the Common Lisp but more so than Arc. I'd be curious to see what the Common Lisp looks like if you are doing something more complicated than an in-place increment. E.g. the equivalent of:

h[x] = h.get(x, 1) * 2
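Both idioms are easy to check; note that in the second form the first occurrence stores 1 * 2 = 2, so after two occurrences of a key the value is 4:

```python
# dict.get with a default, as in the comment above.
h = {}
for x in "abracadabra":
    h[x] = h.get(x, 0) + 1
print(h["a"])   # 5

# The doubling variant with default 1.
g = {}
for x in "aab":
    g[x] = g.get(x, 1) * 2
print(g)        # {'a': 4, 'b': 2}
```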


2 points by bOR_ 3338 days ago | link

I'm a PhD, working alone on projects, and the scripts I write are generally < 300 lines plus 6 functions from a library I wrote. The agent-based models I write are ~200 lines, no libraries.

For me there's not much risk in redefining things.


1 point by jsg 3340 days ago | link

(setf (gethash x h) (* (gethash x h 1) 2))


1 point by ijoshua 3341 days ago | link

A hashtable containing integer values is a common implementation for the collection data structure known as a Bag or Counted Set. The value indicates how many instances of the key appear in the collection. Incrementing the value would be equivalent to adding a member instance. Giving a zero default is a shortcut to avoid having to check for membership.
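Python's stdlib ships this structure ready-made as collections.Counter, which behaves exactly as described - missing keys read as zero:

```python
from collections import Counter

bag = Counter("abracadabra")   # a bag / counted set built from a sequence
print(bag["a"])   # 5
print(bag["z"])   # 0 -- no membership check needed for absent members
bag["x"] += 1     # adding a member instance is just an increment
print(bag["x"])   # 1
```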


8 points by badeyes 3344 days ago | link

Speaking of low-key, what's with the grey on beige look? Don't painters care about contrast?


9 points by pg 3344 days ago | link

I should change it. It's from News.YC, which is mostly shorter posts.


6 points by weeble 3344 days ago | link

I think the phrasing of the announcement may be a large part of the problem. Not doing unicode is about the only concrete thing it says about Arc. People read it expecting to find out why Arc is going to be great. And it didn't really say, except for talking about the principles of conciseness and power. Even the tutorial doesn't say "To allow the writing of concise, powerful programs, Arc introduces features X, Y and Z." It just says "Arc (and lots of other Lisps) have features A, B, C, D, E, F..."

People don't grasp abstract principles well, and even when they do, they don't trust you to mean what they think you mean until they see some concrete evidence. Your problem is that the "unicode admission" and the stuff about the HTML library are the only solid statements in the announcement about what they can expect to find in Arc. That's what they latch on to instead of a vague promise that Arc will let them write shorter programs.


7 points by brent 3344 days ago | link

I find it utterly hilarious that people at news.yc complain to no end that the site has non-"hacker" stuff, and then when an interesting open source project is released they complain that it doesn't have such-and-such a feature they need. So are they hackers or not? What gives?

> stop whining and start hacking


9 points by comatose_kid 3344 days ago | link

While I agree with your sentiment, it is possible that the subset of people who complain about 'non hacker' stuff on news.yc doesn't intersect with the subset of people who complain about some feature that Arc lacks....


2 points by earthboundkid 3344 days ago | link

"Less slackin'; more hackin'."


7 points by gregwebs 3344 days ago | link

I think this is partly just a communication problem - Dale Carnegie would advise taking a different tone. Instead of saying "politically correct", something more apologetic would work better: "I didn't have time yet - of course a better character set will be supported in the future."


3 points by aaco 3343 days ago | link

I fail to see where Arc doesn't support Unicode, since it seems to me that Arc is just using MzScheme strings, which are just Unicode strings.

Can someone explain this to me?

Some examples:

  ;◠ is a multi-byte Unicode char
  arc> (len "a◠b") ; Unicode
  arc> (len "axb") ; ASCII
  arc> (coerce #\◠ 'int) ; Unicode
  arc> (coerce #\x 'int)  ; ASCII
  arc> (subseq "a◠b" 1 2) ; Unicode
  arc> (subseq "axb" 1 2)  ; ASCII
Where doesn't Arc support Unicode?!


3 points by olavk 3343 days ago | link

That just shows how agile PG is. He added unicode support the minute he saw people request it! :)

Seriously, PG explicitly claimed that Arc intentionally doesn't support anything but ASCII, so that might be why people (including me) believed that to be the case.


1 point by aaco 3343 days ago | link

Yes, I think Arc intentionally supports only ASCII just to avoid bothering with Unicode issues right now.

Anyway, I can't see how Unicode can break in Arc. I'm not a Lisper, but I think you can't extract one byte from an Arc string (since it's just an MzScheme string), only one char. That's a different concept, because in Unicode one char can be formed of 1, 2 or more bytes.


2 points by bobbane 3343 days ago | link

Watch out - that's single-portable-implementation thinking. When Paul puts out another release of Arc based on, say, another Scheme implementation, or SBCL, those tricks won't work.


2 points by kennytilton 3344 days ago | link

Complaints take on a life of their own. If it were not Unicode it would be something else. But we do have to complain... don't we? Mine is the absence of dynamic scope (hoping I missed it). Meanwhile, omigod, your prime directive is brevity?! I worked on the same floor as the K guys at UBS; having flashbacks. That and speed were all they could talk about. :)


3 points by Huvet 3322 days ago | link

The problem is not that you didn't support Unicode; it's that you said Unicode isn't important. Say you're sorry (it is important - I even have an ö in my name) and go back to hacking.


8 points by tree 3344 days ago | link

If you don't deal with character sets up front then you too will spend a year getting Unicode (or whatever) right when the time comes, just like GvR (and others) did with Python.

Not using Arc because it lacks Unicode support is a bit silly, but it can be a show-stopper: one of the reasons I never used Ruby and stayed with Python was that I needed support for non-Latin scripts.

Retrofitting this into a language is hard.


1 point by willchang 3344 days ago | link

GvR took a year because he didn't want to break old code. PG appears to have no compunctions about doing such a thing. Not only does that make perfect sense, but he also warned us. The moral of the story is: don't write a million-line application in Arc just yet.


2 points by tree 3343 days ago | link

Then to me this makes the whole thing a non-starter, unfortunately, because no one will want to write any non-trivial program in a language that could (will, by the creator's declaration!) change in incompatible ways in the future.

One example: generic collections in Java 5. Sun went out of their way to ensure compatibility with pre-generics collections, giving us type erasure - bletcherousness for the sake of backwards compatibility.

Characters are such a fundamental part of a modern, general purpose, computer language that it seems short-sighted not to allow for dealing with the issue up front.

Honestly, though, it is early enough in the game that if people wanted to hash out the specification for Unicode support in Arc, it could be done. MzScheme characters are Unicode, aren't they? Build the definition on that foundation.


2 points by lojic 3343 days ago | link

Non-starter for your next production app maybe, but not a non-starter to code enough Arc to see how it compares to your other favorite languages so you can submit suggestions for improvement. If this mode of operation makes it easier to change the language for the better, I'm all for it :)

Eventually, backward compatibility will be very important, but having that too early just kills momentum IMO.


2 points by blobi 3344 days ago | link

I saw you implemented function composition as what seems like a 'symbol hack'. I don't know what to think about it... But in the same vein, I thought it could be possible to do the same for gensyms in macros - something like prefixing hygienic macro variables with @:

(mac n-of (n expr) `(let @ga nil (repeat ,n (push ,expr @ga)) (rev @ga)))


1 point by randallsquared 3343 days ago | link

I like this idea (and the character you chose: @), but as zunz points out, it would be convenient in some cases to have all symbols resolve at the macro definition site, rather than at the use site, except for those specified otherwise (for instance 'it' in the aif definition). If I understand correctly, this is all that hygienic macros require.

Hygiene does have the drawback that you can't bugfix a function used in a macro definition and have it just work, and that might well be more important in a language which is so young. I only mention this because I just did this, and if it hadn't worked I'd have been surprised. But that could be because most of my lisp experience is CL, rather than Scheme.


8 points by far33d 3344 days ago | link

I lost a lot of karma for being nonchalant about char sets. I didn't realize how much people cared. Oh well.


26 points by pg 3344 days ago | link

Me too...


1 point by Xichekolas 3344 days ago | link

At least you got all that negative karma out of the way when you were already starting around 1. All that downvoting just pounded you down to 1 again... not a very long fall.


3 points by olavk 3344 days ago | link

Thank you very much for the clarification! People got riled up because it sounded like you didn't want Arc to support Unicode ever (or that the current support would be removed). As long as the language is in flux, it's not a problem.

However, fundamental Unicode support probably has to be in place before release 1.0. It will be painful to add later once backwards compatibility becomes an issue. For example, a lot of string-processing code might assume that accessing characters by index is constant time; if the internal representation is changed to, e.g., UTF-8, this might lead to performance issues. On the other hand, if code assumes that strings are equivalent to byte arrays, it might lead to trouble if they are changed to arrays of 32-bit values.

I believe the simplest solution is to just have characters be 32-bit integers. The internal representation of a string is then just an array of 32-bit characters. Sure, this consumes more space, but who cares? As long as strings are a type separate from byte arrays, encoding/decoding issues can be handled in libraries.


1 point by weeble 3344 days ago | link

I think the point is that, in the presence of combining diacritics, even 32 bits isn't enough. A character is (roughly) one "base" 32-bit code plus zero or more "combining" 32-bit codes. And equality between two characters isn't purely structural - you might re-order its combining codes or use a pre-combined code. (Not all combinations have pre-combined codes.)

I will point out that I know very little about Unicode, so I might be a bit off. I can't say that I'm even very interested in the whole Unicode debate, so long as it all gets sorted out at some point in the future.
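The pre-combined vs. combining distinction is easy to see in Python's unicodedata module (used here only to illustrate the point, not to suggest an Arc design):

```python
import unicodedata

precomposed = "\u00e9"    # é as one code point
combining   = "e\u0301"   # e followed by COMBINING ACUTE ACCENT

print(len(precomposed), len(combining))   # 1 2
print(precomposed == combining)           # False: code-point equality fails

# Normalization (here NFC) maps both spellings to a canonical form.
print(unicodedata.normalize("NFC", combining) == precomposed)   # True
```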


1 point by tree 3343 days ago | link

The only reason Unicode contains combined forms is for compatibility with existing standards: you cannot invent new code points representing a novel combination of base and combining characters. The Unicode normalization forms deal with these issues.

Unicode support is a complex issue: fundamentally there are the issues of low-level character representation (e.g., internal representation) followed by library support to handle normalization and higher-level text processing operations.


1 point by olavk 3344 days ago | link

True, I should have said Unicode code points rather than characters. I believe the fundamental point is that strings should always be sequences of Unicode code points, and shouldn't be conflated with byte arrays. The thorny issues of normalization, comparison, sorting, and rendering combined characters could be handled by libraries at a later stage.


2 points by zachbeane 3344 days ago | link

The inability to memoize functions that might return nil is annoying. Memoizing is important.


1 point by randallsquared 3343 days ago | link


    (def memo (f)
      (let cache (table)
        (fn args
          (aif (cache args)
              (cadr it)
              (cadr (= (cache args) (list t (apply f args))))))))


2 points by zachbeane 3343 days ago | link

A less gross solution is available if you can ask if a key is present in a table.


1 point by randallsquared 3343 days ago | link

We differ on whether that would be less gross. :)


2 points by zachbeane 3343 days ago | link

If there were an "intable" function, you could do it in 11 forms (20 symbols), but your solution requires 13 forms, or 22 symbols. The shorter solution would be less gross, by definition!

  (def memo (f)
    (let cache (table)
      (fn args
        (if (intable cache args)
            (cache args)
            (= (cache args) (apply f args))))))


1 point by randallsquared 3343 days ago | link

If intable could be made to return the actual value in the table, I'd have no reservations, since you could

    (aif (intable cache args)
         (thevalue it) ; where the fn thevalue does cadr
                       ; or whatever it needs to do
         (= (cache args) (apply f args)))
Scanning the entire table twice - once to see if the key is there and once to actually get the value - is what seemed gross to me about checking for the key first. No doubt that could be optimized by having tables keep a list of keys separately, but that seemed like a heavyweight fix.

I agree that your version is prettier on the surface, though.
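The same missing-vs-nil problem has a third solution besides boxing and an intable check: a private sentinel object that no user value can equal. A Python sketch of the idea (names are made up):

```python
_MISSING = object()   # private sentinel: distinct from every user value

def memo(f):
    cache = {}
    def wrapper(*args):
        val = cache.get(args, _MISSING)
        if val is _MISSING:              # one lookup, no second pass
            val = cache[args] = f(*args)
        return val
    return wrapper

calls = []

@memo
def maybe_none(x):
    calls.append(x)
    return None if x < 0 else x

print(maybe_none(-1), maybe_none(-1))   # None None
print(len(calls))                       # 1 -- the None result was cached
```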


2 points by misuba 3343 days ago | link

It goes to show how much prose writing matters when launching a new language. If you hadn't written a paragraph about character sets in the announcement, nobody would have given you any grief about it. People will focus on what you focus on.


5 points by icey 3344 days ago | link

This is exactly what needed to be said about the Arc release, thank you!


3 points by danja 3343 days ago | link

The current position seems like "well, you can write exploratory programs using only the numbers 0 to 7, so clearly they're the first priority".

String handling is an important feature of any programming language, and strings are sequences of characters. Doesn't that suggest character sets are pretty core?

I'd suggest the main reason character handling is currently such an enormous time suck for developers is that many of the languages in common use were developed before Unicode came along. If you want to design a language that will be an enormous time suck in use now, so be it.

Supporting short programs is great - but supporting short programs that will actually work is much better. A solution that is special-cased to a limited range of values is not a reusable solution.


2 points by Dimitri 3341 days ago | link

Come on danja, your "..only numbers 0 to 7" comment is totally off the mark and you know it.


2 points by thelem 3339 days ago | link

Try to imagine a modern language without support for capital letters, or one that supported only 4 of the usual vowels. Sure, you'd be able to get by, but it would be a major nuisance. That is how important basic non-ASCII characters like accents and eszetts are to western European languages. As you go further east things get worse, until you reach languages like Chinese which bear no resemblance to ASCII.

So yes, Unicode might seem like an unimportant feature to you, but it makes a huge difference to other cultures.

You are of course free to work on the features that interest you; just don't pretend that Unicode isn't an important feature because you don't want to work on it.


4 points by no 3344 days ago | link

Okay pg, it's time to call you out. Unicode support is not trivial, like you make it out to be, and it's not a waste of time. It's a critical piece of infrastructure for any runtime. You fail.


5 points by pg 3330 days ago | link

I'm glad this is preserved for posterity, since it did turn out to be trivial and in fact got added with about -2 lines of code a few days after this comment was posted...


4 points by maxwell 3344 days ago | link

So, how would you implement it?


6 points by kirkeby 3344 days ago | link

I think the Py3K solution sounds right: two different types of strings, 8-bit byte strings and full Unicode strings, with encode and decode functions to convert between the two.

Without this strict separation you get the wart that is Python's current string support.


9 points by olavk 3344 days ago | link

Or just one type of string: Unicode character strings (i.e., sequences of Unicode code points). Then a separate type for byte arrays. Byte arrays are not character strings, but can easily be translated into a string (and back).


3 points by olavk 3343 days ago | link

...and this seems to be exactly what MzScheme provides :-) Strings in MzScheme are sequences of Unicode code points. "bytes" is a separate type which is a sequence of bytes. There are functions to translate between the two, given an encoding.

Python 3000 is close to this, but I think Python muddles the issue by providing character-related operations like "capitalize" on byte arrays. This is bound to lead to confusion. (The reason seems to be that the byte array is really the old 8-bit string type renamed. Will it never go away?) MzScheme does not have that issue.
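Python 3 ends up close to that model; a quick sketch of the str/bytes split:

```python
# str is a sequence of code points; bytes is a sequence of octets.
s = "na\u00efve"              # naïve: 5 code points
b = s.encode("utf-8")         # ï occupies two bytes in UTF-8
print(len(s), len(b))         # 5 6
print(b.decode("utf-8") == s) # True: the round trip is lossless
```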


2 points by eusman 3344 days ago | link

Most people seem to program applications rather than algorithms, which seems to be the case here; otherwise they would recognize the spirit and intention of Arc: to actually make writing programs shorter.

I suppose this is not a priority for other people, whereas building an application is - hence their need for Unicode. But I suppose this is covered by your own statement, Paul, that Arc is not for everyone.

It would be interesting to see performance charts and the implementation of the data structures, to see the benefits.


2 points by lvecsey 3344 days ago | link

As Arc develops, it might be worthwhile to implement subsets of it in other languages for comparison. Then some of the sample applications can be written across all these alternatives. At some point there might be a few examples, or an implementation point, where it's infeasible (i.e., very painful) to use anything but Arc itself.


5 points by kennytilton 3344 days ago | link

Great idea, a port to CL (where I have my best IDE). I want to do a rather intense Arc library, not sure how much cut/paste I will survive. Gonna have to quit the day job...


2 points by nandosperiperi 3344 days ago | link

Someone has already started on an implementation in Haskell


1 point by ehird 3342 days ago | link

it was a parody, to show that in a few days we could do what took 6 years.


2 points by latin 3317 days ago | link

That's so funny. People want to use your new language because you are a smart guy, but the same people can't accept your ideas and procedures. I think if people can't wait for you to finish the language, then they shouldn't use Arc (sorry for my English).


1 point by simono 3177 days ago | link


Good advice, latin. I won't use it.


6 points by vikram 3344 days ago | link

I think PG needs to release the code for this website, so that people can see that Arc is a good language to write software in. Without a real-world example, speculation is all people have to go on.


2 points by rapp 3344 days ago | link

Arc is a powerful gesture toward the ratification of the very concept of language design. Simply stated, the language is in our hands to ensure its evolution.


2 points by mayson 3344 days ago | link

Re my previous:

  arc> (prn "ü")
  ü
  "ü"
  arc>
Those u's all have umlauts, until the blog software gets hold of them.


8 points by nathanpbell 3344 days ago | link

And the blogging software is written in Arc. QED. ;o)


1 point by anon 3344 days ago | link

I hope non-7-bit-ASCII characters are banned from the current implementation. Otherwise, some flames are probably warranted.


1 point by mayson 3344 days ago | link

Has anyone else tried to use Unicode character in Arc?

  arc> (prn "ü")
  ü
  "ü"
  arc>



0 points by hackphrek 3004 days ago | link