Arc Forum | akkartik's comments

My experience: https://lobste.rs/s/v17gol/switching_sublime_text_from_vim_2...

reply

2 points by hjek 17 days ago | link

Nice write-up.

I think regardless of whether it's really an editor to use on a daily basis, it's quite interesting to see any new take on editor keyboard interaction.

reply

4 points by akkartik 80 days ago | link | parent | on: List comprehensions in Arc

musk_fan, your initial attempt inspired me to build on it :) Now that we can enumerate from a start number to an end, I hanker after something more comprehensive. Also, after reading malisper's description of iterate at http://malisper.me/loops-in-lisp-part-3-iterate, I thought I'd try to mimic its syntax, in hopes that it'll fit better with a Lisp and be extensible.

Here's what I ended up with: http://akkartik.name/post/list-comprehensions-in-anarki. Is the description clear?

-----

2 points by akkartik 54 days ago | link

List comprehensions are now in Anarki: https://github.com/arclanguage/anarki/commit/b7155e56a6

reply

4 points by akkartik 84 days ago | link | parent | on: List comprehensions in Arc

Yes, Arc doesn't come with list comprehensions. But it sounds like a fun exercise to build, say, a collect macro:

    (collect x for x in xs if odd.x)
I think it may be similar to the loop macro in Common Lisp. There's a Common Lisp implementation of loop at https://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/... (via http://malisper.me/loops-in-lisp-part-2-loop by malisper)
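One rough way to sketch such a macro in Anarki (untested; it leans on accum and each, and it naively ignores the literal `for` and `in` tokens rather than actually parsing clauses the way loop does):

    ; rough sketch, untested
    (mac collect (expr for- var in- seq . guard)
      (w/uniq gacc
        `(accum ,gacc
           (each ,var ,seq
             ,(if guard
                  `(when ,(cadr guard) (,gacc ,expr))
                  `(,gacc ,expr))))))
So (collect x for x in xs if odd.x) would expand into an accum/each loop that only collects the odd elements. A real version would want to validate the `for`/`in` keywords and support more clause types.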

-----

3 points by musk_fan 80 days ago | link

I wrote a simple version that only works like so, with (l 0 9) == '(0 1 2 3 4 5 6 7 8 9):

    (def l (start end)
      (let lis `(1)
        (pop lis)
        (while (<= start end)
          (= lis `(,@lis ,start))
          (++ start))
        lis))
The (collect ...) structure is really cool; I'd forgotten about that. It's been a while since I touched CLisp

-----

2 points by akkartik 80 days ago | link

Great! A few comments:

1. Is there a reason you start with `(1) and then pop? Why not just initialize lis to nil?

2. Here's a slightly more idiomatic implementation (also more efficient, because it avoids deconstructing and reconstructing lis on every iteration):

    (def l (start end)
      (accum acc
        (for i start (<= i end) ++.i
          acc.i)))
Read more about accum and for:

    arc> (help accum)
    arc> (help for)
(I'm assuming Anarki in the above example. The Arc version is slightly different.)

-----


Wow, perhaps I should become an APL programmer. (My links below are probably familiar to everyone here, so forgive the shameless replugs. I'm mostly just working through commonalities for myself.)

"[There is] a sharp contrast between Subordination of Detail and Abstraction as the term is commonly used in Computer Science. Iverson’s notion of subordination is the elimination of notational obligations through the use of generalization, systematic extension, and implicit guarantees. The usual notion of abstraction is the means by which 'API' barriers may be introduced to implement conceptual frameworks that suppress underlying implementation considerations in order to allow a black box reasoning at a different, non-native abstraction level."

I'm very partial to this point, but confusingly I've been calling Iverson's notion of subordination of detail "abstraction", and Iverson's notion of abstraction "service" [1] or "division of labor" [2]. Though lately I try to avoid the term "abstraction" entirely. That seems on the right track.

Regardless of terminology, this is a critical distinction.

[1] http://akkartik.name/post/libraries [2] http://akkartik.name/post/libraries2

---

"Common practice encourages decomposing a problem into very small components, and most programming languages emphasize a clear picture of a small piece of code in isolation. APL emphasizes finding ways of expressing solutions around the macro-view of the problem."

Compare http://akkartik.name/post/readable-bad

---

Quoting Knuth: "I also must confess to a strong bias against the fashion for reusable code. To me, re-editable code is much, much better than an untouchable black box or toolkit."

Compare me: "Watch out for the warm fuzzies triggered by the word 'reuse'. A world of reuse is a world of promiscuity, with pieces of code connecting up wantonly with each other. Division of labor is a relationship not to be gotten into lightly. It requires knowing what guarantees you need, and what guarantees the counterparty provides. And you can't know what guarantees you need from a subsystem you don't understand."

---

OMG, he's talking about "big picture" and "optimizing for the rewrite". I call the latter rewrite-friendliness at http://akkartik.name/about. And "big picture" is in the navbar on the right of my site.

---

Ok, that's enough replugging. The most tantalizing idea for me here is to try to get structure to do some of the heavy lifting that names do in more conventional languages.

As a final note, this link somebody showed me today seems really relevant: https://iaminterface.com/cognitive-artifacts-complementary-c...

-----

3 points by jsgrahamusarc 107 days ago | link

Good to see that your minds have been moving along the same tracks.

-----

4 points by akkartik 118 days ago | link | parent | on: Quitting the arc server

Not a dumb question at all; there may well be something broken here.

It'll be a few hours before I can try running it, but one possibility: perhaps it will quit after serving one further request? Looking at the code (https://github.com/arclanguage/anarki/blob/3a07f946f9/lib/sr...) I think it's waiting on serve-socket before it gets around to checking quitsrv again. Can you try that if you haven't already?

-----

4 points by noobie 117 days ago | link

Thank you for your help!! Could you help me understand what you'd like me to do?

I'm not sure what you think I should try.

After looking up the definition of the 'until' macro, the referenced line of code seems to tell me: as long as quitsrv* is nil, continue to serve-socket, which looks to me, at a noobie's glance, like it calls "accept-request-with-deadline", opening up threads to serve the request?

i.e., quitsrv* now returns t. According to this definition, doesn't that mean that serve-socket should stop?

And incidentally, when I run more defop macro calls, it returns the 'procedure', but (asv) doesn't work; none of the new pages on localhost show up (instead they say "Unknown"). I'll look more into it but am not really sure how to proceed after glancing at (def asv).

I've copied down the definitions for while and whilet from arc.arc but they don't seem helpful at first glance. Will look further into them if you think it would be useful.

Thank you!!

-----

4 points by akkartik 117 days ago | link

Not at all. I meant that if your server doesn't serve much traffic it'll spend much of its time inside serve-socket blocked on a new connection. It's only after serving a connection that it'll loop around to check quitsrv.
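In other words, the relevant loop in srv.arc is shaped roughly like this (paraphrased, not the exact source):

    (until quitsrv*      ; the flag is only checked here...
      (serve-socket s))  ; ...but this call blocks waiting for the next connection
After you set quitsrv*, the server is already parked inside serve-socket, so it has to finish serving one more request before control loops back around to the until check.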

I just ran an experiment, and indeed the server stops after one more request:

    $ ./run-news
    initializing arc..
    ready to serve port 8080
    arc> (= quitsrv* t)
    ; now browse to localhost:8080; page served
    quit server ; printed by serve
    ; hit reload on localhost:8080; no response

-----

3 points by akkartik 125 days ago | link | parent | on: Yet another post regarding Arc & Heroku

I don't really know Heroku, sorry, but to answer your final question: It was a while ago, but I used to host an Arc webapp on a VPS behind Apache/Nginx.

-----

3 points by matthewwiese 124 days ago | link

Cool, thanks for the reply :) I've seen it discussed elsewhere that the way to have SSL (at least with the News example in the repository) is to run through Nginx.

Did that complicate your setup by much?

-----

5 points by shader 124 days ago | link

I've recently discovered the Caddy server (https://caddyserver.com/), which makes SSL and application proxy deployments super easy. Like, 2 lines of config:

  domain.com               #uses the domain to automatically set up SSL with Let's Encrypt
  proxy / localhost:8080   #redirect everything to Arc on 8080
I will say that Arc is a bit resource-intensive, and the slightly slow boot times mean you don't want it to have to re-launch because of infrequent requests. I don't know how well it would work on Heroku.

Also, some VPS services like vultr.com offer $5/mo nodes that have more resources than what you get from Heroku at $7/mo anyway.

-----

4 points by shader 124 days ago | link

I should mention that if what you want from Heroku is their deployment process, you can actually replicate some of it pretty easily with Caddy (though I have not done so yet myself; I plan to soon...)

Specifically, they have support for automatically fetching code from git and running a command in the repo, either periodically or triggered by a webhook: https://caddyserver.com/docs/http.git

That doesn't get you the app ecosystem that Heroku offers, but you can get a lot of that pretty easily via docker and docker-compose now.

-----


Hmm, I don't think pg has abandoned Arc. And I don't think maintaining compatibility is a concern even so. So far Arc has made no compatibility guarantees. We're all forking pre-alpha software. So it's conceivable the hundred-year language will have features from Anarki. Forks that restrict where they get good ideas from will be outcompeted by forks that don't.

Though lately I consider it more likely that a hundred-year language will have a leveled-up conception of "compatibility", one that assumes orders of magnitude more forking activity. (See "the Zen of Mu" at the bottom of http://akkartik.github.io/mu; Mu is my attempt at building out the foundations for a hundred-year stack. With, perchance, something like Arc on top.) Perhaps Arc shouldn't be a single language but a family of forks (https://lobste.rs/s/n0d3qo#c_rue8pf). Not a single tree but a clonal colony (https://en.wikipedia.org/wiki/List_of_longest-living_organis...). That'll only work if we can make it easy for superficially-incompatible forks to exchange features and functionality back and forth. Which is an unsolved problem so far. So we may well be very far from a hundred-year language.

Anyways, this tangent is my contribution. I don't have a short-term answer for how to solve Arc's users-vs-libraries chicken-and-egg problem ^_^

-----

4 points by shader 133 days ago | link

> I don't have a short-term answer for how to solve Arc's users-vs-libraries chicken-and-egg problem ^_^

Be the change you want to see in the world... I'm not really sure how to motivate the community, but I am rather attached to it, even if I have been a very infrequent lurker over the past years. We have had a relatively high amount of discussion over the past few days though...

I don't think we can just expect to flip a switch and suddenly get a community; we have to /be/ a community, and then people might be willing to join us.

-----

3 points by shader 133 days ago | link

Maybe we should emphasize the flexibility of our language designs by making the language itself more modular. It would be challenging from a compatibility/dependency standpoint, but those are problems we might have to solve anyway. It would help to have better isolation of components.

-----

2 points by hjek 133 days ago | link

I agree.

Some newbie friendly documentation to ns.arc would be great, or perhaps some very simple examples. Have you tried using ns.arc?

https://github.com/arclanguage/anarki/blob/master/lib/ns.arc

-----

2 points by rocketnia 129 days ago | link

It looks like I might've subtly broken ns.arc with my own changes to make Anarki installable as a Racket package. Here's an example that should be working, but currently isn't:

  ; my-file.arc
  (= n 2)
  (= my-definition (* n n))
  
  
  arc>
    (= my-definition
      (let my-ns (nsobj)
        
        ; Populate the namespace with the current namespace's bindings.
        (each k (ns-keys current-ns)
          ; Racket has a variable called _ that raises an error when
          ; used as an expression, and it looks like an Arc variable, so
          ; we skip it. This is a hack. Maybe it's time to change how
          ; the Arc namespace works. On the other hand, copying
          ; namespaces in this naive way is prone to this kind of
          ; problem, so perhaps it's this technique that should be
          ; changed.
          (unless (is k '||)
            (= my-ns.k current-ns.k)))
        
        ; Load the file.
        (w/current-ns my-ns (load "my-file.arc"))
        
        ; Get the specific things you want out of the namespace.
        my-ns!my-definition))
  4
  arc> n
  _n: undefined;
   cannot reference an identifier before its definition
    in module: "/home/nia/mine/drive/repo/mine/prog/repo/not-mine/anarki/ac.rkt"
    context...:
     /home/nia/mine/drive/repo/mine/prog/repo/not-mine/anarki/ac.rkt:1269:4
The idea is, you create an empty Arc namespace with (nsobj), you use `w/current-ns` to load a file into it, and you use `a!b` or `a.b` syntax to manipulate individual entries.

An "Arc namespace" is just a convenience wrapper over a Racket namespace that automatically converts between Arc variables `foo` and their corresponding Racket variables `_foo`.

For some overall background...

I wrote ns.arc when I didn't have much idea what Racket namespaces or modules could do, but I was at least sure that changing the compiled Arc code to more seamlessly interact with Racket's `current-namespace` would open up ways to load Arc libraries without them clobbering each other. It wouldn't be perfect because of things like unhygienic macros, but it seemed like a step in the right direction.

I went a little overboard with the idea that Racket namespaces and Racket modules could be manipulated like Arc tables. However, that was the only clear vision I had when I embarked on writing the ns.arc library, so I approximated it as well as I could anyway. In fact, I don't think the utilities for generating first-class modules (like `simple-mod` and `make-modecule`) are all that useful, because as I understand a little better now, Racket modules are as complicated as they are mainly to support separate compilation, so generating them at run time doesn't make much sense.

I'm still finding out new things about what these can do, though. Something I didn't piece together until just now was that Racket has a `current-module-name-resolver` parameter which can let you run arbitrary code in response to a top-level (require ...) form. I presume this would let you keep track of all the modules required this way so you can `namespace-attach-module` them to another namespace later. Using this, the kind of hackish partial-namespace-copying technique I illustrate above can probably be made into something pretty robust after all, as long as Anarki sets `current-module-name-resolver` to something specific and no other code ever changes it. :-p

-----

3 points by rocketnia 119 days ago | link

I tinkered with Anarki a whole bunch and finally got this working smoothly. There was a missing step, because it turns out we need to load certain Racket-side bindings into a namespace in order to be able to evaluate Arc code there. It seems more obvious in hindsight. :)

I approached this with the secondary goal of letting a Racket program (or a determined Arc program) instantiate multiple independent instances of Anarki. The ac.rkt module was the only place we were performing side effects when a Racket module was visited, and Racket's caching of modules makes it hard to repeat those side effects on demand, so I moved most of them into a procedure called `anarki-init`.

By adding one line to the example I gave...

  (= my-definition
    (let my-ns (nsobj)
      
      ; Load the Arc builtins into the namespace so we can evaluate
      ; code.
      (w/current-ns my-ns ($.anarki-init))

      ...))
...it becomes possible to evaluate Arc code in that namespace, and the example works.

I used issue #95 on GitHub to track this task, and I talk about it a little more there: https://github.com/arclanguage/anarki/issues/95

Before I started on that, I did a bunch of cleanup to get the Anarki unit tests and entrypoints running smoothly on all our CI platforms. To get started on this cleanup, I had a few questions hjek and akkartik were able to discuss with me on issue #94: https://github.com/arclanguage/anarki/issues/94

A lot of the problems I'm fixing here are ones I created, so it's a little embarrassing. :) It's nice to finally put in some of this missing work, though. I want to say thanks to shader and hjek for talking about modules and packages, provoking me to work on this stuff!

-----

3 points by akkartik 119 days ago | link

And thank you :) I'm glad you got something out of it, because the project's certainly better for it.

-----

3 points by shader 133 days ago | link

> lobste.rs

I was confused for a second or two, because the current top post on that site is almost identical to the one you commented on a year ago.

(What are you working on this week? by caius)

-----

2 points by hjek 133 days ago | link

> Forks that restrict where they get good ideas from will be outcompeted by forks that don't.

That is so true. LibreOffice and Gitea come to mind, but also what happened with io.js/nodejs.

-----


No, I didn't try it.

-----

1 point by akkartik 160 days ago | link | parent | on: The cargo cult of versioning

I updated my original post based on conversations I had about it, and my updated recommendation is towards the bottom:

"Package managers should by default never upgrade dependencies past a major version."

I now understand the value of version pinning. But it seems strictly superior to minimize the places where you need pinning. Use it only when a library misbehaves, rather than always adding it just to protect against major version changes.
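As a purely hypothetical illustration (not any specific package manager's syntax, and all names and versions invented), a manifest following this recommendation might look like:

    foo >= 2.1, < 3    ; the default: take upgrades, but never cross a major version
    badlib == 1.4.2    ; pinned only because a later release actually misbehaved
The point is that the exact pin on badlib is the exception you reach for after a breakage, not the default posture for every dependency.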

---

CI integrated with a package manager would indeed be interesting. Hmm, it may disincentivize people from writing tests though.

-----

2 points by shader 159 days ago | link

A reduced need for tests may not be a bad thing.

We don't write tests just to have tests, but to verify that certain important functionality is preserved across versions. The trouble with many tests is that they are written to fill coverage quotas and don't actually check important use cases. By using actual dependents and _their_ tests as a test for upstream libraries, we might actually get a better idea of what functionality is considered important or necessary.

Anything that nobody's using can change; anything that somebody relies on should not, even if it's counterintuitive.

The problem remains that most user code will still not exist in the package manager. It might be more effective if integrated with the source control services (github, gitlab, etc.), which already provide CI and host software even if it's not intended to be used as a dependency. The "smart package" system could then use the latest head that meets a specified testing requirement, instead of an arbitrary checkpoint chosen by the developers.

-----

2 points by akkartik 159 days ago | link

Oh, I just realized what you meant. Yes, I've often wanted an open-source app store of some sort where you had confidence you had found every user of a library. That would be the perfect place for CI if we could get it.

Perhaps you're also suggesting a package manager that is able to phone home and send back some sort of analytics about function calls and arguments they were called with? That's compelling as well, though there's the political question of whether people would be creeped out by it. I wonder if there's some way to give confidence by showing exactly what it's tracking, making all the data available to the world, etc. If we could convince users to enter such a bargain it would be a definite improvement.

-----

2 points by shader 157 days ago | link

I wasn't really considering that level of integration testing. It would certainly be cool to get detailed error reports from the CI tool. I don't see why you couldn't, since the code is published on the public package system, and you would be getting error listings anyway.

I don't think it would be creepy as long as it's not extracting runtime information from actual users. If it's just CI test results, it shouldn't matter.

Live user data would be a huge security risk. Your CI tests could send you passwords, etc., if they happen to pass through your code during an error case.

-----

2 points by akkartik 159 days ago | link

I wasn't saying there'd be a reduced need for tests. It's hard for me to see how adding CI would reduce the need for tests. I'm worried that people will say this stupid CI keeps failing, so "best practice" is to not write tests. (The way that package managers don't enforce major versions, so "best practice" has evolved to be always pinning.)

Unnecessary tests are an absolutely valid problem, but independent :)

-----

2 points by shader 157 days ago | link

By "reduced need for tests" I didn't mean that the absolute number of tests would decline, but rather the need and incentives for the development team to write the tests themselves. Since they have the ecosystem providing tests for them, they don't need to make as many themselves. At least, that's how I understood the discussion.

So yes, if the package manager only enforced the tests you include in your package it would incorrectly discourage including tests. But if it enforces tests that _other_ people provide, you have no way around it. The only problem is how to handle bad tests submitted by other people. Maybe only enforce tests that passed on a previous version but fail on the current candidate?

-----

1 point by akkartik 157 days ago | link

Ooh, that's another novel idea I hadn't considered. I don't know how I feel about others being able to add to my automated test suite, though. Would one of my users be able to delete tests that another wrote? If they only have create privileges, not update, how would they modify tests? Who has the modification privileges?

These are whole new vistas, and it seems fun to think through various scenarios.

-----

2 points by shader 154 days ago | link

It's not really the same as others being able to add tests to your automated suite. Rather, they add tests to their own package, and then the CI tool collects all tests indirectly dependent on your library into a virtual suite. Those tests are written to test their code, and only indirectly test yours. If a version of their package passes all of their tests with a previous version of your code, but the atomic change to the latest version of your code causes their test to fail, the failure was presumably caused by that change. The tests will probably have to be run multiple times to eliminate non-determinism.

It's still possible that someone writes code that depends on "features" that you consider to be bugs, or a pathologically sensitive test, so there may need to be some ability as the maintainer to flag tests as poor or unhelpful so they can be ignored in the future. Hopefully the requirement that the test pass the previous version to be considered is sufficient to cover most faulty tests though.
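The rule in that last sentence could be sketched like so (Arc-flavored pseudocode; `passes` and `flagged-unhelpful?` are hypothetical helpers, not anything that exists):

    ; a dependent's test only counts against the candidate version if it
    ; passed against the previous one and the maintainer hasn't vetoed it
    (def regression? (test old new)
      (and (passes test old)                  ; baseline: the test is sane
           (no (passes test new))             ; the candidate change breaks it
           (no (flagged-unhelpful? test))))   ; maintainer override for bad tests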

-----

1 point by akkartik 154 days ago | link

Yes, this was kinda what I had in mind when I talked about an open source app store. Make it easy for libraries to be aware of how they're used.

-----

2 points by shader 151 days ago | link

It doesn't look like it would be too hard to start building such a system on top of Github; they provide an api for seaching for repos by language.

We could potentially build it first for Arc, which would be pretty small and simple, but also provide the package manager we've always wanted :P

-----

1 point by akkartik 173 days ago | link | parent | on: How Rebol's macros differ from Lisp's

You're passing in a function to gt10, and the article does admit that that is doable everywhere:

"The trick is to change the unit of currency from passing source code to passing functions."

But try passing in '(pr msg) as a list to gt10.

-----

More