Arc Forum | shader's comments
5 points by shader 98 days ago | link | parent | on: Yet another post regarding Arc & Heroku

I've recently discovered the Caddy server (https://caddyserver.com/), which makes SSL and application proxy deployments super easy. Like, 2 lines of config:

  domain.com               #uses the domain to automatically set up SSL with Let's Encrypt
  proxy / localhost:8080   #redirect everything to Arc on 8080
I will say that Arc is a bit resource-intensive, and the slightly slow boot times mean you don't want it to have to re-launch because of infrequent requests. I don't know how well it would work on Heroku.

Also, some VPS services like vultr.com offer $5/mo nodes that have more resources than what you get from Heroku at $7/mo anyway.

-----

4 points by shader 98 days ago | link

I should mention that if what you want from Heroku is their deployment process, you can actually replicate some of it pretty easily with Caddy (though I have not done so yet myself; I plan to soon...)

Specifically, they have support for automatically fetching code from git and running a command in the repo, either periodically or triggered by a webhook: https://caddyserver.com/docs/http.git
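Something like that in the Caddyfile is only a few more lines (a sketch based on the http.git docs; the repo URL, webhook secret, and restart command are all placeholders):

  domain.com
  proxy / localhost:8080
  git https://github.com/user/my-arc-app {
    hook /webhook my-secret   #re-pull when the repo host sends a push webhook
    then ./restart-arc.sh     #command to run inside the repo after each pull
  }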

That doesn't get you the app ecosystem that Heroku offers, but you can get a lot of that pretty easily via Docker and docker-compose now.

-----

4 points by shader 108 days ago | link | parent | on: Next steps

> Unvote (like HN)

That would be nice I suppose, but we can't actually modify the arclanguage forum, so it would be of limited benefit. I guess a lot of people come to arc because they want to run an HN clone though, so it might be worth pursuing.

> File upload

Yeah, I think enhancing some of the fundamental web-service functionality in arc would be good. Beyond file upload, better support for OAuth, SSL, etc. would be useful too.

> Ecommerce

Good idea. Not sure how to make that better myself, but demystifying and enabling easy ecommerce would be cool.

I think that shares the same motivation as the previous category: we need better support for web application development, because that's what most people want to do.

-----

3 points by hjek 107 days ago | link

It's true that unvote wouldn't make it to Arc Forum as it's never updated, but the same is true for any new feature, sadly.

Regarding SSL, I've been working on an example Nginx + LetsEncrypt secure reverse proxy configuration for Arc, https://github.com/arclanguage/anarki/blob/master/extras/new...

It's just insane that Arc Forum doesn't encrypt HTTP but, well, as you write, we can't change Arc Forum. Maybe they'll update it some day, or someone will set up an unofficial one that's more up to date?

Regarding web application development, there's already an interesting web-based app development interface, https://github.com/arclanguage/anarki/blob/master/lib/prompt...

It looks quite humble but it has both a repl and an interface for saving/running various web apps.

I've set it to be enabled by default in Anarki, though it's only available to admins, because it can run system commands. I wonder if it would be possible to allow any user to develop apps securely, e.g. by disabling unsafe commands or sandboxing it somehow? Perhaps there is a simple way of doing that? It could open Arc web app development up to any user of an Anarki-driven page.

I think those features mentioned above still need better documentation, too.

Other HN features that Anarki News is lacking:

* favourites

* hide item

* past front pages

* show all stories from same site by clicking on domain name

-----

3 points by shader 111 days ago | link | parent | on: Next steps

The second most popular idea is probably a module system. I honestly don't know much about what that would entail, beyond reading rocketnia's comments in his ns.arc from 7 years(!) ago.

Basically, it seems kinda hard, because of how much arc currently depends on global tables to collect special objects like macros, the interactive help stuff, etc.

We could just decide that Arc is deliberately focused on 'hackability', like emacs, and that modules are unnecessarily refined and restrictive for our uses.

-----

5 points by shader 111 days ago | link | parent | on: Next steps

My first thought is a package system based on melpa / use-package from emacs. Please point out if something like this has already been done and I have just forgotten it.

Basically, just a simple index and package-fetching system that pulls libraries directly from github or other vcs sources. Then we only need one file in the standard library, "package.arc", that provides functions for querying the index and fetching packages from github into a ~/.arc directory, supported by a macro like use-package from emacs (https://github.com/jwiegley/use-package) or 'ns from clojure.
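To make that concrete, here's the kind of interface I have in mind (a sketch only; none of these functions exist yet, and all the names are made up):

  ; hypothetical package.arc API
  (pkg-search "json")   ; query the index for matching libraries
  (pkg-fetch 'json)     ; clone the matching repo into ~/.arc/json
  (use-pkg json)        ; fetch if missing, then load the library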

An important point here is that a package fetching utility can be independent of any module system. Which is good, because we don't really have that yet. The emacs community doesn't seem to think one is necessary; everything is just imported into the global namespace, and prefixed with the package name if necessary to keep it separate. Maybe we could make some macros to simplify the prefix process, but that could quickly get complicated.

We could also experiment with some avant-garde packaging ideas, such as akkartik's thoughts on avoiding version pinning, searching the vcs sources directly for the package, or building community CI tools that automatically find downstream dependencies and run their tests against your lib changes.

-----

3 points by zck 108 days ago | link

> My first thought is a package system based on melpa / use-package from emacs.

That would be great! Use-package is amazing, and that (combined with similar loading things from package.el) could make loading dependencies way easier. I know that when I'm writing arc code, I'm reluctant to use libraries -- even libraries included with arc -- because they're (afaik) impossible to automatically load.

> We could also experiment with some avant garde packaging ideas, such as akkartik's thoughts on avoiding version pinning...

Amusingly, the Emacs ecosystem can be thought of as having a package manager that avoids version pinning: the standard package repositories (GNU ELPA, MELPA) only keep the latest version of a package around; you can't install older versions.

Of course, I know of no package foo.el that introduced _foo2.el_ when breaking backwards compatibility. So we can maybe do better that way.

-----

3 points by shader 111 days ago | link

I'd be willing to donate some hosting for an initial version of the package index, if we decide we need one. I have a vultr instance I'm not using for much at the moment.

-----


It shouldn't be too hard if we provide some real reasons for people to use the language. A 'killer app' if you will.

That and production-readiness, so that they don't have to stop using their favorite 'toy' when they want to build something real. Some of that involves just making more libraries available.

Maybe starting with the package-management / CI system that pulls directly from github akkartik and I were discussing earlier. It would be cool, not too hard to implement, and make sharing libraries a lot easier.

-----

4 points by shader 113 days ago | link

This is where a more traditional forum system would be beneficial. I know we've done lots of work on module / package systems in the past, but it's not easy to do research on it. Maybe we could use github issues on anarki to discuss some of these topics?

I did find lib/ns.arc, which rocketnia had done apparently back in 2011...

Modules and packages are related, but can be implemented independently. The package system is just for discovering and fetching code. Modules keep it separate and help avoid mistakes caused by interface overlap. Honestly, we can probably do without modules if we're careful - for example, emacs doesn't have namespaces, just a prefix convention.

Not having modules does make arc seem more like a toy than a production system though. You're more at the mercy of the remote developers.

-----

4 points by hjek 111 days ago | link

Personally I find that the 'killer feature' is Racket integration, as Racket has quite an extensive, well-documented standard library (and an FFI, so all C libraries are available as well).

Even if Arc doesn't get much attention itself, it still benefits from the work going into Racket.

So there are a lot of libraries available. (Arc's Racket integration still has a few rough edges, particularly regarding keyword arguments, but there are workarounds.)

One interesting library is this video editing DSL, http://docs.racket-lang.org/video

-----

2 points by shader 111 days ago | link

That doesn't really qualify as a 'killer feature' for Arc; it doesn't provide any incentive for someone to learn and use Arc - and keep using it - over just learning Racket in the first place.

I do agree that it dramatically improves the value of Arc over being written as a stand-alone interpreter. And providing easier access to Racket features may be a good idea. It helps bootstrap Arc, much as Clojure leaned on the JVM and the Java standard libraries until proper clj wrappers were written.

-----

3 points by hjek 109 days ago | link

I do know some Racket, and this exact feature did provide an incentive for me to learn and use Arc, because I could apply what I already knew.

Arc feels more brief and less strict, so in some cases it's more pleasant to use than Racket.

When I found out about Arc's Racket interop a couple of years ago, I was really surprised that this was not documented anywhere, so I added a section in the Readme.

-----

3 points by shader 108 days ago | link

I agree it's important, and we should probably develop it and make it easier, because it's an important foundation. However, there aren't that many Racket developers, and "it happens to be a better version of another language" isn't really the best argument in favor of using a language. That might be a competitive advantage against Racket, but what about compared to other interpreted languages? Why would someone want to use arc over Python or Ruby?

> When I found out about Arc's Racket interop a couple of years ago, I was really surprised that this was not documented anywhere, so I added a section in the Readme.

Yeah, the documentation could use some work. Definitely not our competitive advantage at the moment, though that could be changed if we build on our interactive help system.

Also, that feature is really only part of anarki; I don't think arc3.1 had it. At this point it seems like pg has pretty well abandoned arc though, so maintaining compatibility shouldn't be a primary concern.

-----

2 points by akkartik 108 days ago | link

Hmm, I don't think pg has abandoned Arc. And I don't think maintaining compatibility is a concern even so. So far Arc has made no compatibility guarantees. We're all forking pre-alpha software. So it's conceivable the hundred-year language will have features from Anarki. Forks that restrict where they get good ideas from will be outcompeted by forks that don't.

Though lately I consider it more likely that a hundred-year language will have a leveled-up conception of "compatibility", one that assumes orders of magnitude more forking activity. (See "the Zen of Mu" at the bottom of http://akkartik.github.io/mu; Mu is my attempt at building out the foundations for a hundred-year stack. With, perchance, something like Arc on top.) Perhaps Arc shouldn't be a single language but a family of forks (https://lobste.rs/s/n0d3qo#c_rue8pf). Not a single tree but a clonal colony (https://en.wikipedia.org/wiki/List_of_longest-living_organis...). That'll only work if we can make it easy for superficially-incompatible forks to exchange features and functionality back and forth. Which is an unsolved problem so far. So we may well be very far from a hundred-year language.

Anyways, this tangent is my contribution. I don't have a short-term answer for how to solve Arc's users-vs-libraries chicken-and-egg problem ^_^

-----

4 points by shader 107 days ago | link

> I don't have a short-term answer for how to solve Arc's users-vs-libraries chicken-and-egg problem ^_^

Be the change you want to see in the world... I'm not really sure how to motivate the community, but I am rather attached to it, even if I have been a very infrequent lurker over the past years. We have had a relatively high amount of discussion over the past few days though...

I don't think we can just expect to flip a switch and suddenly get a community; we have to /be/ a community, and then people might be willing to join us.

-----

3 points by shader 107 days ago | link

Maybe we should emphasize the flexibility of our language designs by making the language itself more modular. It would be challenging from a compatibility/dependency standpoint, but those are problems we might have to solve anyway. It would help to have better isolation of components.

-----

2 points by hjek 107 days ago | link

I agree.

Some newbie friendly documentation to ns.arc would be great, or perhaps some very simple examples. Have you tried using ns.arc?

https://github.com/arclanguage/anarki/blob/master/lib/ns.arc

-----

2 points by rocketnia 103 days ago | link

It looks like I might've subtly broken ns.arc with my own changes to make Anarki installable as a Racket package. Here's an example that should be working, but currently isn't:

  ; my-file.arc
  (= n 2)
  (= my-definition (* n n))
  
  
  arc>
    (= my-definition
      (let my-ns (nsobj)
        
        ; Populate the namespace with the current namespace's bindings.
        (each k (ns-keys current-ns)
          ; Racket has a variable called _ that raises an error when
          ; used as an expression, and it looks like an Arc variable, so
          ; we skip it. This is a hack. Maybe it's time to change how
          ; the Arc namespace works. On the other hand, copying
          ; namespaces in this naive way is prone to this kind of
          ; problem, so perhaps it's this technique that should be
          ; changed.
          (unless (is k '||)
            (= my-ns.k current-ns.k)))
        
        ; Load the file.
        (w/current-ns my-ns (load "my-file.arc"))
        
        ; Get the specific things you want out of the namespace.
        my-ns!my-definition))
  4
  arc> n
  _n: undefined;
   cannot reference an identifier before its definition
    in module: "/home/nia/mine/drive/repo/mine/prog/repo/not-mine/anarki/ac.rkt"
    context...:
     /home/nia/mine/drive/repo/mine/prog/repo/not-mine/anarki/ac.rkt:1269:4
The idea is, you create an empty Arc namespace with (nsobj), you use `w/current-ns` to load a file into it, and you use `a!b` or `a.b` syntax to manipulate individual entries.

An "Arc namespace" is just a convenience wrapper over a Racket namespace that automatically converts between Arc variables `foo` and their corresponding Racket variables `_foo`.

For some overall background...

I wrote ns.arc when I didn't have much idea what Racket namespaces or modules could do, but I was at least sure that changing the compiled Arc code to more seamlessly interact with Racket's `current-namespace` would open up ways to load Arc libraries without them clobbering each other. It wouldn't be perfect because of things like unhygienic macros, but it seemed like a step in the right direction.

I went a little overboard with the idea that Racket namespaces and Racket modules could be manipulated like Arc tables. However, that was the only clear vision I had when I embarked on writing the ns.arc library, so I approximated it as well as I could anyway. In fact, I don't think the utilities for generating first-class modules (like `simple-mod` and `make-modecule`) are all that useful, because as I understand a little better now, Racket modules are as complicated as they are mainly to support separate compilation, so generating them at run time doesn't make much sense.

I'm still finding out new things about what these can do, though. Something I didn't piece together until just now was that Racket has a `current-module-name-resolver` parameter which can let you run arbitrary code in response to a top-level (require ...) form. I presume this would let you keep track of all the modules required this way so you can `namespace-attach-module` them to another namespace later. Using this, the kind of hackish partial-namespace-copying technique I illustrate above can probably be made into something pretty robust after all, as long as Anarki sets `current-module-name-resolver` to something specific and no other code ever changes it. :-p

-----

3 points by rocketnia 93 days ago | link

I tinkered with Anarki a whole bunch and finally got this working smoothly. There was a missing step, because it turns out we need to load certain Racket-side bindings into a namespace in order to be able to evaluate Arc code there. It seems more obvious in hindsight. :)

I approached this with the secondary goal of letting a Racket program (or a determined Arc program) instantiate multiple independent instances of Anarki. The ac.rkt module was the only place we were performing side effects when a Racket module was visited, and Racket's caching of modules makes it hard to repeat those side effects on demand, so I moved most of them into a procedure called `anarki-init`.

By adding one line to the example I gave...

  (= my-definition
    (let my-ns (nsobj)
      
      ; Load the Arc builtins into the namespace so we can evaluate
      ; code.
      (w/current-ns my-ns ($.anarki-init))

      ...))
...it becomes possible to evaluate Arc code in that namespace, and the example works.

I used issue #95 on GitHub to track this task, and I talk about it a little more there: https://github.com/arclanguage/anarki/issues/95

Before I started on that, I did a bunch of cleanup to get the Anarki unit tests and entrypoints running smoothly on all our CI platforms. To get started on this cleanup, I had a few questions hjek and akkartik were able to discuss with me on issue #94: https://github.com/arclanguage/anarki/issues/94

A lot of the problems I'm fixing here are ones I created, so it's a little embarrassing. :) It's nice to finally put in some of this missing work, though. I want to say thanks to shader and hjek for talking about modules and packages, provoking me to work on this stuff!

-----

3 points by akkartik 93 days ago | link

And thank you :) I'm glad you got something out of it, because the project's certainly better for it.

-----

3 points by shader 107 days ago | link

> lobste.rs

I was confused for a second or two, because the current top post on that site is almost identical to the one you commented on a year ago.

(What are you working on this week? by caius)

-----

2 points by hjek 107 days ago | link

> Forks that restrict where they get good ideas from will be outcompeted by forks that don't.

That is so true. LibreOffice and Gitea come to mind, but also what happened with io.js/nodejs.

-----

4 points by i4cu 109 days ago | link

I think it is a killer feature in a round-about way. I believe the killer feature is actually generic and robust database integration. There's a whole lot of people who start with data on hand and need to hack together an app from there. The starting point for db integration is Racket, which also gets you access to other libraries. Once that's in place, arc libs can be created, eliminating the need to even learn Racket (just like there are plenty of Clojure programmers who don't know Java).

IMO doing package integration first is putting the cart before the horse.

-----

3 points by shader 108 days ago | link

> the killer feature is actually generic and robust database integration

That could be a killer feature, but I don't think we have it developed yet. I certainly wouldn't have thought of it. Basically, when I say "killer feature", I'm thinking of the specialization or distinguishing characteristic that we would emphasize in a Quick Start tutorial.

When arc was first launched, the "arc challenge" of building a multi-action website with a form in only ~5 lines of code was the "special feature." Right now, I think hackability and simplicity of the syntax are two of the better things, but we could probably specialize on more.

> IMO doing package integration first is putting the cart before the horse

The purpose for working on package integration is to enable further development. Without the ability to share code easily, it's a lot harder to build on and benefit from community effort. In itself, I agree, a package manager is boring and probably not a killer feature. However, making it really easy to start building something useful by searching for and loading relevant code straight from the interactive console would be a pretty big step.

Perhaps the specialty I'm looking for is exploratory programming, which we've mentioned before. Our interactive help system is pretty good. The only problem others have mentioned before was that Python is arguably better, just because there are already examples and libraries for doing most activities, whereas Arc requires a lot of development effort just to get fundamental components working.

-----

2 points by hjek 107 days ago | link

> Without the ability to share code easily, it's a lot harder to build on and benefit from community effort.

I've just been dumping my Arc experiments into the Anarki repository. Akkartik manages it by the policy that pretty much anyone can commit anything, so I haven't had any issues sharing my code. I've recently put a Puredata compiler written in Arc in there, https://github.com/arclanguage/anarki/blob/master/lib/pd.arc

That said, there is an Arc package manager, Hackinator by awwx. http://awwx.ws/hackinator

Perhaps that is worth a look?

The interactive help in Anarki is great, although there are some undocumented functions. A great improvement, which would be very easy to implement, would be to add the documentation from the various Arc sites to the Anarki interactive help.

-----

3 points by i4cu 108 days ago | link

Well there's that, but honestly... exploratory programming with a tool that can't provide basic functionality will limit how much you can explore. I think you mean 'language design exploration'? If so, I'll stop right there, since, well, frankly... it's not my wheelhouse. :)

> package integration first is putting the cart before the horse

I'm simply saying that features like db integration bring users, users bring manpower, and manpower will provide package management in a way that accounts for more use cases. But again, this is moot if what you're interested in is only the language-development arena. Though I'm not sure how you could prove out a language unless you can actually use it for real-world work.

P.S. If you want users, then killer feature would = todo list mobile app with local db in 30 lines of code!

Cheers.

-----

3 points by shader 107 days ago | link

> exploratory programming with a tool that can't provide basic functionality will limit how much you can explore

Absolutely true, which is why I do think the Racket integration is important, so we can just wrap its many and powerful libraries in lighter-weight arc libs, and also why I think a decent package system is important. It needs to be easy to share code, and to find solutions to problems. Otherwise everyone spends too much time rebuilding the same wheels.
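For example, Racket's (real) db library can already be driven from Arc through Anarki's $ escape into Racket; a rough sketch (the file and table names are arbitrary, and the interop details may need massaging):

  ($ (begin (require db)
            (define conn (sqlite3-connect #:database "app.db" #:mode 'create))
            (query-exec conn "create table if not exists todos (text, done)")))

A lightweight arc lib would just hide that behind a couple of ordinary Arc functions.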

> features like db integration bring users

Absolutely. I think better db support in arc would be awesome to have. Especially if we can build an ecosystem where "intelligent defaults" are the norm, so that 90% of the time the exploratory developer can get a db up and running with 1-2 lines.
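Something like this, say (a purely hypothetical wrapper API; nothing like it exists in Arc today):

  ; hypothetical "intelligent defaults" db interface
  (= db (open-db "todos"))   ; creates ./todos.db if it doesn't exist
  (db-put db 'item1 (obj text "buy milk" done nil))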

> todo list mobile app with local db in 30 lines of code

An admirable goal. That's actually a great idea of something we could work toward as a community project, each building pieces that eventually come together to make a decent modern app.

Were you thinking a mobile friendly web app, or trying to run it natively on a phone? I'm not really sure how the latter would work... Though building a compatible javascript base for arc would be pretty nice. I do like being able to use consistent code for both clients and servers, as in node and clojure.

-----

2 points by i4cu 107 days ago | link

> Were you thinking a mobile friendly web app, or trying to run it natively on a phone?

The java/js ecosystem has the largest reach, making it the easy choice. One could work on a js compiler and target PouchDB as a starting point. That said, choosing js also takes Arc further down the path Clojure has already gone, putting the two closer together, and with Clojure being so far ahead in that arena, maybe it's doomed to fail. The other way to go is to do what Clojure is not great at. iOS development? Maybe integration with Swift? At any rate I'm not a language development guy. I can only tell you what would appeal to me. I mostly use Clojure, and a really easy-to-use arc->iOS app development ecosystem would be really cool.

-----

2 points by hjek 109 days ago | link

> package-management / CI system that pulls directly from github

How would this be different than git submodules?

https://github.com/blog/2104-working-with-submodules

-----

2 points by shader 108 days ago | link

> How would this be different than git submodules?

Most of the difference would be in the user interface. Git submodules might actually be a great idea for how to implement it, to encourage hacking on and submitting changes to the libraries themselves.

Git submodules don't provide a very friendly user experience though. You have to know the repository url and manage them separately. It's not terrible, but it's extra work, and it doesn't handle loading the library into your own code, or dependency management. Ideally, pulling in a package would also result in pulling in all of its dependencies, without pulling in multiple copies of the same library the way npm used to.
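For comparison, the raw submodule workflow looks like this (real git commands), and even then nothing loads the code into your program:

  $ git submodule add https://github.com/arclanguage/anarki lib/anarki
  $ git submodule update --init --recursive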

There are a lot more features that could be offered by an actual package system.

-----

4 points by shader 119 days ago | link | parent | on: Arc History: use of exclamation ssyntax

I'm having a hard time finding any documentation on ssyntax either. Must have been in some early forum posts... But your understanding is the same as mine.

For a little more background, the "." symbol is traditional Lisp syntax for a single cons cell. So (1 . 2) is a cons cell with car containing 1 and cdr containing 2. This is in contrast to the list '(1 2), which expands to two cons cells: (1 . (2 . nil)).

Cons cell notation is often used for pairs, such as k/v pairs in a hash table, since you don't need the ability to append additional content.

pg added "a.b" as "symbol syntax" to arc to provide shorthand for (a b), which is a very common pattern - calling a single argument function, looking something up in a table, etc. Furthermore, it chains, so "a.b.c" expands to ((a b) c) - the pattern you would want if you were going to look something up in a nested set of objects.
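You can check the expansions at the REPL with ssexpand, which exists in both arc3.1 and Anarki:

  arc> (ssexpand 'a.b.c)
  ((a b) c)
  arc> (ssexpand 'tbl!key)
  (tbl (quote key))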

And as zck points out, symbols (quoted names) are very common keys for objects, since they are perfect for literal names. In fact, that's precisely how templates are used to make something analogous to classes in arc.

https://arclanguage.github.io/ref/template.html

(and yes, you two probably know this, but hopefully it's useful for somebody)

I'll keep looking for explanations of ssyntax though.

-----


What are the strengths of Parrot as a platform compared to other VMs?

-----

3 points by shader 125 days ago | link

As a result of my research trying to answer this question, I now think it would be interesting to use PyPy to implement arc, if nobody has done so yet.

-----

2 points by shader 134 days ago | link | parent | on: The cargo cult of versioning

Interesting discussion. I do appreciate your observations that only the major version matters, and that could be equivalent to renaming the project, since it doesn't really communicate anything useful other than "differences exist".

However, I don't really agree that preventing version pinning will somehow encourage better coding standards. Hypothetically it makes sense; without version pinning, breaking changes would be more noticeable and painful, and users would have more incentives to switch to other libraries that don't do that. Unfortunately, software is not yet and possibly never will be the kind of competitive market required for that to be effective.

The problem is that the pain is entirely felt by the users; the developers may not even realize when one of their changes is "breaking", because they aren't running the users' code. Furthermore, since libraries are usually very different even from those with similar functionality, updating to accommodate the breaking changes will almost always be easier than switching to a different one with a better reputation (assuming it even exists). Since the user can't control the behavior of the developer, the best they can do to overcome breaking changes on their end is version pinning. It's not a good solution, but it's effectively the only solution.

Maybe in the long run people would read horror stories and avoid those libraries, but that still doesn't help much, because the developers usually don't have much incentive to actually meet the needs of their users--they are not really customers, merely beneficiaries. This would be different for commercial software, but then you won't be getting the resource through a package manager anyway.

One interesting solution would be a package manager with integrated CI functionality, so any minor update to a package is automatically tested against every other package that has it as a dependency. This wouldn't catch all of the errors, since most code wouldn't be published, but it would make the developers much more aware of and sensitive to breaking changes. If they still want to make the change anyway, they can change the name.

-----

3 points by shader 128 days ago | link

I've been thinking more about the "renaming" vs "numbered versions" for packages, and I'm now leaning more strongly towards a distinct version number—or at least a separate version field, number or otherwise.

It's true that the major version number doesn't convey much, except the vague idea that the authors believe it to be better, or they wouldn't have written it. However, as long as they are unique and you have some easy way to find the "latest" version, it doesn't really matter if they're numbers or not. Commit hashes or code names should work just as well.

However, I think there's a simple security argument in favor of making the version a subfield, to prevent spoofing by malicious or mischievous third parties. Otherwise anyone could claim that their fork was "python-4".

An alternate solution might be to add a "successor" field, so that a package could identify another package as the rightful heir, even if it wasn't developed by the same team. That should make the open-source fork-based community development a little easier. You'd still have to know what the root package was to follow the chain though.
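In manifest terms, both ideas are just extra fields; a hypothetical sketch (no such Arc manifest format actually exists):

  ; hypothetical package manifest
  (pkg "python3"
    version   "3.7.1"      ; a proper subfield, so a fork can't claim "python-4"
    successor "python4")   ; the designated heir, possibly by another team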

-----

1 point by akkartik 134 days ago | link

I updated my original post based on conversations I had about it, and my updated recommendation is towards the bottom:

"Package managers should by default never upgrade dependencies past a major version."

I now understand the value of version pinning. But it seems strictly superior to minimize the places where you need pinning. Use it only when a library misbehaves, rather than always adding it just to protect against major version changes.

---

CI integrated with a package manager would indeed be interesting. Hmm, it may disincentivize people from writing tests though.

-----

2 points by shader 133 days ago | link

A reduced need for tests may not be a bad thing.

We don't write tests just to have tests, but to verify that certain important functionality is preserved across versions. The trouble with many tests is that they are written to fill coverage quotas, and don't actually check important use cases. By using actual dependents and _their_ tests as a test for upstream libraries, we might actually get a better idea of what functionality is considered important or necessary.

Anything that nobody's using can change; anything that users rely on should not, even if it's counterintuitive.

The problem remains that most user code will still not exist in the package manager. It might be more effective if integrated with the source control services (github, gitlab, etc.), which already provide CI and host software even if it's not intended to be used as a dependency. The "smart package" system could then use the latest head that meets a specified testing requirement, instead of an arbitrary checkpoint chosen by the developers.

-----

2 points by akkartik 133 days ago | link

Oh, I just realized what you meant. Yes, I've often wanted an open-source app store of some sort where you had confidence you had found every user of a library. That would be the perfect place for CI if we could get it.

Perhaps you're also suggesting a package manager that is able to phone home and send back some sort of analytics about function calls and arguments they were called with? That's compelling as well, though there's the political question of whether people would be creeped out by it. I wonder if there's some way to give confidence by showing exactly what it's tracking, making all the data available to the world, etc. If we could convince users to enter such a bargain it would be a definite improvement.

-----

2 points by shader 131 days ago | link

I wasn't really considering that level of integration testing. It would certainly be cool to get detailed error reports from the CI tool. I don't see why you couldn't, since the code is published on the public package system, and you would be getting error listings anyway.

I don't think it would be creepy as long as it's not extracting runtime information from actual users. If it's just CI test results, it shouldn't matter.

Live user data would be a huge security risk. Your CI tests could send you passwords, etc., if they happen to pass through your code during an error case.

-----

2 points by akkartik 133 days ago | link

I wasn't saying there'd be a reduced need for tests. It's hard for me to see how adding CI would reduce the need for tests. I'm worried that people will say this stupid CI keeps failing, so "best practice" is to not write tests. (The way that package managers don't enforce major versions, so "best practice" has evolved to be always pinning.)

Unnecessary tests are an absolutely valid problem, but independent :)

-----

2 points by shader 131 days ago | link

By "reduced need for tests" I didn't mean that the absolute number of tests would decline, but rather the need and incentives for the development team to write the tests themselves. Since they have the ecosystem providing tests for them, they don't need to make as many themselves. At least, that's how I understood the discussion.

So yes, if the package manager only enforced the tests you include in your package it would incorrectly discourage including tests. But if it enforces tests that _other_ people provide, you have no way around it. The only problem is how to handle bad tests submitted by other people. Maybe only enforce tests that passed on a previous version but fail on the current candidate?

-----

1 point by akkartik 131 days ago | link

Ooh, that's another novel idea I hadn't considered. I don't know how I feel about others being able to add to my automated test suite, though. Would one of my users be able to delete tests that another wrote? If they only have create privileges, not update, how would they modify tests? Who has the modification privileges?

These are whole new vistas, and it seems fun to think through various scenarios.

-----

2 points by shader 128 days ago | link

It's not really the same as others being able to add tests to your automated suite. Rather, they add tests to their own package, and then the CI tool collects all tests indirectly dependent on your library into a virtual suite. Those tests are written to test their code, and only indirectly test yours. If a version of their package passes all of their tests with a previous version of your code, but the atomic change to the latest version of your code causes their test to fail, the failure was presumably caused by that change. The tests will probably have to be run multiple times to eliminate non-determinism.
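In pseudo-Arc, the rule might look like this (every helper here is hypothetical):

  ; blame an upstream change only if the dependent's own tests pass
  ; against the old version and fail against the new one
  (def breaking-change (lib old new dep)
    (and (passes (tests dep) (w/version lib old))
         (fails  (tests dep) (w/version lib new))))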

It's still possible that someone writes code that depends on "features" that you consider to be bugs, or a pathologically sensitive test, so there may need to be some ability as the maintainer to flag tests as poor or unhelpful so they can be ignored in the future. Hopefully the requirement that the test pass the previous version to be considered is sufficient to cover most faulty tests though.

-----

1 point by akkartik 128 days ago | link

Yes, this was kinda what I had in mind when I talked about an open source app store. Make it easy for libraries to be aware of how they're used.

-----

2 points by shader 125 days ago | link

It doesn't look like it would be too hard to start building such a system on top of Github; they provide an api for searching for repos by language.

We could potentially build it first for Arc, which would be pretty small and simple, but also provide the package manager we've always wanted :P
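For instance, the repository search is a single request against the (real) GitHub v3 API:

  $ curl "https://api.github.com/search/repositories?q=language:arc"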

-----


I'm sure many of you have already seen this (it's from 2012...), but it seems relevant to the discussion we were having about the future of programming.

He makes the great point that we should be enabling creation, and tightening the feedback loop between thought and product. I've been mostly focusing on better ways to represent thoughts and communicate them to the computer, but this draws attention to the purpose of programming itself.

-----

3 points by akkartik 238 days ago | link

I was just thinking about it yesterday, after reading https://www.theatlantic.com/technology/archive/2017/09/savin.... Very inspiring talk.

-----

2 points by shader 232 days ago | link

That's precisely where I found out about it :D

-----

2 points by breck 235 days ago | link

Such a great talk, like his others.

I think it would be great to live in a world where not only could you use your finger to create a sprite animation, but if curious, you could also more easily delve into all the black boxes that make that experience happen (down to the physical level).

I like the NOMODES license plate. If you all had to pick a license plate to describe your work, what would it be? I might go with NOPARENS or NOSYNTAX.

-----

2 points by akkartik 235 days ago | link

Mine would be COPYMORE. Or NODEPS.

I think I share your vision: http://arclanguage.org/item?id=17277

-----

2 points by shader 232 days ago | link

Do you have any references for those terms? Or a short explanation for them?

-----

1 point by akkartik 232 days ago | link

COPYMORE

I'm what I like to call a 'copyista': I think DRY is overrated, and abstraction is overrated, and people are too quick to create abstractions to compress code rather than for conceptual clarity. Some links: http://www.sandimetz.com/blog/2016/1/20/the-wrong-abstractio...; http://programmingisterrible.com/post/139222674273/write-cod...; http://bravenewgeek.com/abstraction-considered-harmful; http://akkartik.name/post/modularity; http://dimitri-on-software-development.blogspot.de/2009/12/d...; http://thereignn.ghost.io/on-dry-and-the-cost-of-wrongful-ab...; http://akkartik.name/post/habitability. You don't have to read them all, but hopefully this gives you as much flavor as you want :)

NODEPS

This is short for "no dependencies". I think a lot of software's ills stem from people's short-sighted tendency to promiscuously add dependencies. In fact, our fundamental metaphor of libraries is wrong. Adding a library to your program isn't like plugging a new block into your Lego set. It's like hiring a new plumber. You're not just adding a few lines of code to a file somewhere, you're creating a relationship. Every time I see someone talk about "code smells", I wait to see if they'll bring up having too many dependencies. Usually they don't, and I tune them out. And the solution is easy. When you find a library that does something useful, consider copying it into your project. That insulates you from breaking changes upstream, and frees up upstream to try incompatible changes. As a further salubrious effect, it encourages you to hack on the library and tune it to your purposes. (Without giving up the options of merging further changes from them, or submitting patches upstream.)

As it happens, this worldview of mine was actually catalyzed by conversations here in the Arc Forum, most proximally http://www.arclanguage.org/item?id=13263. That thread led to me writing http://akkartik.name/post/libraries and (a little clearer) http://akkartik.name/post/libraries2.

I consider an example of exemplary library use to be how I copied the termbox library into Mu (https://github.com/akkartik/mu/commit/5f1285238b), periodically merged commits from upstream (https://github.com/akkartik/mu/commit/9ba313ab7f), gradually cleaned it up to fit better with my project (https://github.com/akkartik/mu/commit/9a31c34f0f), and gradually stripped out code from it that Mu does not require (https://github.com/akkartik/mu/commit/c04baba4f2; https://github.com/akkartik/mu/commit/8e7827dfcf; https://github.com/akkartik/mu/commit/547ec78bf2). In the process I made some wrong turns, deleting features that I later decided I wanted (https://github.com/akkartik/mu/commit/10a3b8cca2) and created bugs for myself (https://github.com/akkartik/mu/commit/3315a7d3bb; https://github.com/akkartik/mu/commit/0c0d1ea5cd). But when it did things I didn't want, I was now empowered to change them (https://github.com/akkartik/mu/commit/ee1a18f050). One of my patches was useful upstream, so I submitted it: https://github.com/nsf/termbox/commit/0730826a07. I would be in no position to submit that patch if I hadn't taken the trouble to understand termbox's internals. That's another benefit of copying and privately forking libraries: it makes you a better citizen of the open source world, because open source depends on eyeballs, and using a library blindly helps nobody except your (extremely short-term) self.

More broadly, Mu is suffused with this ethos. My goal is that if you have a supported platform you should be able to run it with three commands:

  $ git clone https://github.com/akkartik/mu
  $ cd mu
  $ ./mu
(That highlights another benefit: your software becomes easier for others to try out. Without giving out binaries, because what's the point of being open-source if you do that?)

Mu's also geared to spread this idea. I want to build an entire software stack in which any part is comprehensible to any programmer with an afternoon to spare (http://akkartik.name/about). Which requires having as little code as possible, because every new dependency is a source of complexity if you're building for readers rather than users. In chasing this goal I'm very inspired by OpenBSD for this purpose. It's the only OS I know that allows me to recompile the entire kernel and userland in 2 commands (https://github.com/akkartik/mu/wiki/Building-OpenBSD-on-Open...). People should be doing this more often! I think I'm going to give up Mu and build my next project atop OpenBSD. But that's been slow going.

---

Ah, here's an old HN thread where I managed to combine both these ideas: https://news.ycombinator.com/item?id=11158357#11189308

I'd have preferred to more directly call out my hatred for compatibility constraints, but I couldn't figure out how to fit it on a license plate :)

-----

2 points by breck 219 days ago | link

The Pike maxim "A little copying is better than a little dependency" comes to mind. I think the overhead of dependencies is underrated ("it's just a 1 line import statement!"), and often a little repetition is a good thing.

-----

3 points by shader 252 days ago | link | parent | on: 3-Dimensional Source Code

> ... trading of programming theories ...

Clear and simple syntax / representation is important; combined with matching editing tools it enables us to communicate ideas easily and fluently.

I also like the idea of well defined input spaces. Many theorems or algorithms only work under certain conditions, and much damage has been done by applying them outside of their intended domains. But I think that's only part of the problem.

My own theory is that programs are specifications, and the more clearly and precisely they specify the better. Programs can fit into a matrix of good/bad ideas and good/bad specifications. Of these, two kinds are interesting bugs:

  1) Incorrectly specified good ideas
  2) Correctly specified bad ideas
Well specified good ideas are correct programs, and incorrectly specified bad ideas are just hopelessly confused.

Improving the languages and tools will never fix bad ideas, but they can make them more obvious. Now the goal is to make programming as close as possible to 'saying what you mean'. In other words, making the semantics as explicit as possible.

Basically my goal is 'declarative programming', which turns out to be a very vague concept to most people. They all agree that it's better, but nobody seems to have a good explanation for why. I think the difference is that declarative programs specify only the relationships which are important, leaving the rest up to the platform to optimize or interpret as it sees fit. This leads to powerful and concise languages such as SQL, but at the cost of placing the burden on the platform rather than the programmer. Good for communication and clarity, bad for development and adoption.

Basically, declarative languages can be more concise because they rely more on shared knowledge; predefined vocabulary. If the language doesn't already have a way to express the concept you want, however, it is much more work to add. Imperative / procedural programs are more flexible because they rely on implicit semantics. You just tell the computer what to do—you don't have to explain what it is doing or why. Everything the program "accomplishes" is imaginary and external to the specification. This leaves very little room for the computer to optimize your selection of operations, and leaves a lot of room for you to accidentally provide an incorrect sequence of steps.
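A small Arc example of the contrast (assuming people is a list of tables, each with an 'age field):

  ; imperative: spell out the steps
  (= adults nil)
  (each p people
    (when (>= p!age 18) (push p adults)))

  ; declarative: state only the relationship that matters
  (= adults (keep [>= _!age 18] people))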

It's like the difference between giving directions by saying "Go to the grocery store at 5th and Main" vs. "Take a left, go three blocks, take a right, go two more blocks, park on the right side of the street and enter the blue building." The first is much clearer, but places much higher expectations on the navigation abilities of the recipient, while the second can be followed by anyone even though they have no idea where they're going - and mistakes are correspondingly harder to notice.

Sadly, the nature of declarative languages makes them fairly domain specific, which may explain part of why they're so rare and hard to make. Creating a declarative language for solving a class of problems is much harder than solving a single problem imperatively; you actually have to think of how and why you're solving those problems. But I think we could probably create some general patterns and guidelines for defining them, and maybe even start building up some tools to reduce the effort required.

-----
