Arc Forum
2 points by thaddeus 4050 days ago | link | parent

I suggest starting with Anarki (https://github.com/arclanguage/anarki) if you haven't already. Note that the readme instructions also point to Racket rather than mzscheme.

Arc 3.1 with either mzscheme 372+ or any version of Racket will address the mutable pairs issue (http://arclanguage.org/item?id=10254).



2 points by lark 4050 days ago | link

I tried Anarki before, but I recall that at the time it wasn't backwards compatible with Arc. There was something about saving and reading tables that required specifying their datatype, which differed from Arc. Has this changed?

I would like to use a version that also supports file uploads. Anarki is the only version I know of that supports them. Do other Arc implementations support file uploads?

-----

2 points by thaddeus 4050 days ago | link

Ahh, I think you're referring to the template issue: http://arclanguage.org/item?id=16171.

I don't see how that issue would stop you from using Anarki. And it appears news.arc was modified to accommodate the change.

I have to say, I don't like that change making templates into first-class objects. It would have been better to add something like an 'inst-with' to extend inst.
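
Just as a rough, untested sketch of the surface API I have in mind (hypothetical code, leaning on the fact that assigning nil to an arc table key deletes it):

  ; hypothetical inst-with: takes explicit field/value pairs;
  ; a nil value strips that field's default out of the table
  (def inst-with (tem . args)
    (let x (inst tem)
      (each (k v) (pair args)
        (= (x k) v))   ; assigning nil removes the key
      x))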

If that change is a show-stopper for you, then I would just fork Anarki and eliminate the template changes, or fork Arc 3.1 and add the file-upload change.

Also, can you provide some insight as to what you're doing? Are you just creating an HN-ish clone with file uploads for added content? As I mentioned in the other post, I wouldn't use that stuff for data storage anyway. So maybe consider making more dramatic changes and using a DB?

-----

2 points by lark 4049 days ago | link

Yes, it was the template issue. It wouldn't stop me from using Anarki, but it would stop me from using some other Arc implementation if that became necessary, because I would have to modify the application.

Forking Anarki also means maintaining the fork, which means one more thing I have to worry about and probably break. I'd rather only add what I need: file uploads in Arc 3.1. Uploading files over HTTP is needed regardless of the storage behind the application (filesystem vs. database).

On a database, is there a sample Arc program that writes to Postgres?

-----

2 points by thaddeus 4049 days ago | link

Can't say I've seen a sample program for Postgres, but I would expect it to be fairly straightforward to use any database supporting HTTP as a protocol. And, really, almost any DB would be an upgrade compared to storing each record in its own flat file -- correct?

MongoDB: http://docs.mongodb.org/ecosystem/tools/http-interfaces/

Datomic: http://blog.datomic.com/2012/09/rest-api.html

OrientDB: https://github.com/orientechnologies/orientdb/wiki/OrientDB-...

MySQL: http://code.nytimes.com/projects/dbslayer

Riak: http://docs.basho.com/riak/latest/dev/references/http/

And so on.

Of course, Anarki also has HTTP GET/POST functionality where Arc 3.1 does not:

https://github.com/arclanguage/anarki/blob/master/lib/http.a...

You would also need to parse the HTTP results.

If you're dead-set on Postgres, you could do something like this: http://rny.io/nginx/postgresql/2013/07/26/simple-api-with-ng... Though this seems a little hackish to me.

Personally I'd use something like FleetDB (http://blackstag.com/blog.posting?id=3) over TCP before I'd go back to that 1 record = 1 file nonsense.

-----

2 points by lark 4048 days ago | link

Thank you for looking up this work. A database makes a lot of things simpler, like indexes and particularly joins. Thank you also for pointing out that Anarki has HTTP GET/POST; I didn't know this.

Could I ask why you wouldn't go back to saving data in files?

It's a clean, fast solution if one doesn't need joins. Using SQL requires building up strings or putting together an ORM. Better not to go down that path if you don't need to.

https://news.ycombinator.com/item?id=6580834

-----

3 points by thaddeus 4048 days ago | link

> wouldn't go back to saving data in files.

It's not the file structure itself that's the issue; it's the mechanisms built around accessing the file(s). For a SMALL HN clone it's probably OK, but if the requirements change even a little, you're in for some trouble.

Food for thought: once your app data can't fit into memory, you're going to need to move to a real database or spend big bucks on more hardware. You could do what HN does and purge older data (reloading only when required), but if your app design requires regular access/writes to older data and it doesn't all fit into memory, then your hardware is going to start to thrash (HN had these types of problems, from what I read). And with HN, since each record is stored in its own file, it's going to be that much worse.

Also, it's pretty useful to have your DBs and applications running on different servers. Doing this lets other applications running elsewhere access the data too. So once again it all boils down to your design and expectations, which is why I originally asked if you are just running an HN clone.

FYI I just posted a FleetDB v2 link on the Arc forum if you want to check that option out.

Added: Looks like there's another HTTP library: https://github.com/arclanguage/anarki/blob/master/lib/web.ar... Not sure which one is better, but this one looks newer.

-----

2 points by lark 4047 days ago | link

Thank you for FleetDB. The ability to execute transactions atomically is useful.

Access to older data is equally a problem for the filesystem and for a database. Databases don't get away with it. Once the app data can't fit into memory, databases thrash on disk too.

The only argument I see is that save-table saves an entire record in a single file. Wanting access to a specific field of that record means loading the entire record. A lot of databases function this way too, but in general they don't have to: they could bring individual fields into memory. You could claim that's what makes filesystem access slow.

But even then, save-table could be modified to save each field in a separate file.
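
Roughly, as an untested Arc 3.1 sketch (file naming and escaping hand-waved away; 'save-fields' is a made-up name):

  ; write each field of a table to its own file under dir,
  ; one file per key
  (def save-fields (tbl dir)
    (ensure-dir dir)
    (each (k v) tbl
      (w/outfile o (string dir "/" k)
        (write v o))))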

If the only thing missing from a filesystem is atomicity of multiple transactions, then I'd rather implement just that, in Arc, than write an entire database.

-----

2 points by thaddeus 4047 days ago | link

> Thank you for FleetDB.

No problem. Note, though, that FleetDB is all in memory too. The advantage is that you can:

  1. Put the database on its own server with dedicated RAM.
  2. Compose and re-use queries.
  3. Have index support built in, without writing code.
  4. Have robust, high-performance concurrency semantics, not just atomic file writes.

> Databases don't get away with it. Once the app data can't fit into memory, databases thrash on disk too.

You're correct that they too will seek to disk, but nowhere near as much as when each record is handled in its own file. Just try loading a million files vs. 1 file with a million records.

> But even then, save-table could be modified to save each field in a separate file.

Which will multiply the number of disk seeks even further.

As I stated before, it really depends upon your design. Ask yourself: are you going to have, say, a million records where you have to run a query against all of them? Would indexes help with that query on that data? Do you really want to code each index into a table and then change all your application code to use them? Or would you rather instruct your database to add an index and discover that the existing queries already take advantage of it? Note that the last time I checked, HN offloaded that kind of heavy lifting to a third-party service[1] that uses[2], you guessed it, a database!

I am not a database expert, and I don't want to convince you to use a tool that you don't need. I'm just saying I see enough advantages in using a database that I don't plan on using file writes.

[1] https://www.hnsearch.com [2] http://www.thriftdb.com/documentation

-----

2 points by akkartik 4047 days ago | link

"I have to say, I don't like that change in making templates into first class objects."

Can you elaborate on why? Since I was the one to make that change, I'm happy to be overruled or to discuss this further to address people's concerns.

-----

2 points by thaddeus 4047 days ago | link

Sure.

It seems to me the original problem was this: one defines a template with a non-nil value as the default for a field; then, upon instantiating the template, one wants to pass in a nil value to keep that field's non-nil default out of the resulting table.

By making templates first-class objects you have permitted these overrides, but at the same time you have introduced an additional layer of complexity for every user of templates (and in some cases even of tables) who doesn't care about that specific circumstance.

So at first glance I can't help but wonder: why not just rewrite templates in a manner that fixes the poor handling of nil values? Then we could have kept the inst function while also allowing other methods to be created, such as an 'inst-with' that allows nil values to override defaults (or just have inst allow such overrides).

I'm not sure how templates are written, but I imagine they make heavy use of tables, which is where the nil pruning is probably occurring. So templates would instead need to use association lists up until the output stage, where the data then gets pushed into a table.
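
To be concrete, this is the table behavior I mean (repl transcript from memory, so treat the output as approximate):

  arc> (= x (obj a 1 b 2))
  #hash((a . 1) (b . 2))
  arc> (= (x 'a) nil)   ; assigning nil deletes the key
  nil
  arc> x
  #hash((b . 2))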

Or, if instead you want tables to support nil values, that's also a change worth considering; but I see the introduction of a whole new type to solve one particular circumstance as overkill, with downsides of its own.

Now, I haven't looked at any code to inform my thinking. I only have this to go off of: http://arclanguage.org/item?id=15664

-----

1 point by akkartik 4046 days ago | link

You're absolutely right about the historical particulars. Templates started out as light macros atop tables, and their serialization was indistinguishable from that of tables. But they always seemed like a separate type, given all the names we've devoted to manipulating them separately from tables. I thought the layer of complexity already existed, and I was trying to simplify things so people didn't need to know that templates are built out of tables.

The problem with not supporting nil values in templates -- and therefore in the data models of any webapp -- is that you can't save booleans as booleans. That seems ugly.

Adding new names to paper over problems with existing names seems ugly as well.

Your idea of questioning the design of tables themselves is interesting. I hadn't considered that. A persistent data structure seemed like a separate beast, distinct from a table. But perhaps all tables should be easy to persist. Hmm.

---

My attitude towards anarki has been that it's trying to be the core for a hundred-year language, so it should get the core right without regard to how things were before. So we should avoid unseemly hacks for things that seem simple -- like storing a user pref that's a boolean -- without regard to how commonly they happen, or how easy workarounds are to find.

But I can absolutely imagine that others might have a different perspective on anarki. Let me ask you this: how would you design templates if you were doing so from scratch? I'm trying to get at how much of your argument hinges on compatibility. Perhaps there's a more general question here that we should be discussing. Perhaps we should pay more attention to lark's use case: not wanting to think about arc's internals.

-----

2 points by thaddeus 4046 days ago | link

> is that you can't save booleans as booleans.

Hmm. Well, I see that as a separate issue. I think nil and boolean false are very different things: nil means no value (or false), while false means a false value. In fact, I was recently going to suggest that arc should support proper booleans and that arc tables should store both boolean values while maintaining the nil-pruning feature. This really stems from wanting to easily transform to/from JSON or edn-like formats. Currently one has to fudge arc's write-json by passing around the symbols 'true / 'false.
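
i.e. the fudge looks something like this ('jbool' is just a made-up helper, not anything arc ships with):

  ; encode booleans as symbols so a table can hold "false"
  ; without the entry being pruned away
  (def jbool (b) (if b 'true 'false))

  arc> (obj published (jbool nil))
  #hash((published . false))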

> how would you design templates if you were doing so from scratch?

Well, it's interesting that you ask, because I did just that when I implemented templates in Clojure. Of course, Clojure supports true boolean values and its maps can hold nil values, so it was trivial and very useful.

My Clojure templates are on steroids, though. Not only do they match most of the features in arc, but they are cumulative and accept anonymous functions as field values. The values can also refer to other fields' inputs or results.

So for example I just did this one:

  => (deftem :article
      :id        #(UUID) 
      :published true       
      :text      nil 
      :msec      #(msec)
      :seconds   #(seconds (% :msec))
      :date      #(format-date (date (% :msec)) "yyyy/MM/dd HH:mm:ss"))
  
  #result 
  
  => (invoke :article :text "cool" :published nil)
  
    {:id #uuid "08692e96-de87-49aa-9ad1-e33bcd61e712", 
     :text "cool", 
     :msec 1382644173685, 
     :seconds 1382644173, 
     :date "2013/10/24 15:49:33"}
Notice how :seconds and :date use :msec as an input argument? ... Well that's how I would do it. :)

-----

1 point by akkartik 4046 days ago | link

Wow, I'd love to see that code!

Thanks also for the connection to json. There's a lot to consider here.

-----

3 points by thaddeus 4046 days ago | link

The code is

  1. In bad shape (I wrote it early on).
  2. Includes partial features not fully implemented.
  3. Has references to functions I cannot split out without creating a bunch of work.
  4. Includes features you would not care for (Datomic-ish).
  5. Has oddities that make you wonder "why like that?", until you realize you can pass in, say, a map of args instead.

For the above reasons, I was going to say I'll pass on releasing the code, but as long as you're OK just getting scaffolding that will not run for you, then here you go:

  (def index* (ref (hash-map)))
  (def templates* (ref (hash-map)))
  (def mutes* (ref (hash-map)))
  (def selfs* (ref (hash-map)))

  (defmacro deftem [name & fields]
    `(let [tem#   (quote ~name)
           order# (evens (list ~@fields))
           fmaps# (apply hash-map (list ~@fields))]
       (dosync (alter templates* assoc tem# fmaps#)
               (alter index* assoc tem# order#)
               fmaps#)))

  (defmacro defmute [name & fields]
    `(let [tem#   (quote ~name)
           items# (list ~@fields)]
       (dosync (alter mutes* assoc tem# items#)
               items#)))

  (defmacro defself [name & fields]
    `(let [tem#   (quote ~name)
           items# (list ~@fields)]
       (dosync (alter selfs* assoc tem# items#)
               items#)))

  (defn invoke-fields
    ([tem base fields allowables]
     (invoke-fields tem base fields allowables nil))
    ([tem base fields allowables testfn]
     (let [fks   (keys fields)
           selfs (@selfs* tem)]
       (reduce
        (fn [m k]
          (assoc m k
                 (if (detect? k fks) ; must use 'detect?' rather than 'find', since nil vals must be inserted
                   (aifn (fields k)
                         (try (it m)
                              (catch Exception e (it)))
                         (let [bfn (base k)]
                           (if (and (detect? k selfs) (fn? bfn))
                             (try (bfn (merge m {k it}))
                                  (catch Exception e (bfn)))
                             it)))
                   (aifn (base k)
                         (try (it m)
                              (catch Exception e (it)))
                         it))))
        (hash-map) allowables))))

  (defn invoke [tem & fields]
    (let [temx  (split-name tem)
          tem1  (first-identity temx)
          atem? (is (last temx) "+")
          xfn   (type-fn tem)
          temk  (xfn tem1)
          base  (@templates* temk)
          prox  (@mutes* temk)
          fval  (first fields)
          fmap  (cond (map? fval)  fval ; when loading a map of saved records from file
                      (coll? fval) (apply hash-map fval)
                      :else        (apply hash-map fields))
          imap  (invoke-fields temk base fmap (@index* temk))]
      (reduce
       (fn [m [k v]]
         (if (or (missing? k base) (nil? v) (detect? k prox))
           (dissoc m k)
           (assoc m (if atem? (nsify temk k) k) v)))
       (hash-map) imap)))

-----

2 points by thaddeus 4050 days ago | link

I'm not aware of a save-table issue. I just ran this and had no problems:

  arc> (save-table (obj a 10 b 20) "tfile")
  nil

  arc> (load-tables "tfile")
  (#hash((b . 20) (a . 10)))
edit: IMHO, I would avoid using save-table/load-tables as a data storage mechanism. It may work for certain application designs[1], but otherwise you would be better off writing to a real database.

1. http://arclanguage.org/item?id=15419

-----

1 point by akkartik 4047 days ago | link

Why are you concerned about compatibility? Did you have production data with the old version?

The reason I changed it was that the old way was buggy; there were scenarios where a round-trip to disk would corrupt data.
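
From memory, the failure mode looked roughly like this (untested, so treat the names and output as approximate):

  arc> (deftem foo bar 10)
  arc> (writefile (tablist (inst 'foo 'bar nil)) "tmpfoo")
  arc> (temload 'foo "tmpfoo")
  #hash((bar . 10))   ; the nil we saved came back as the default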

-----

2 points by lark 4046 days ago | link

Yes, I have production data with Arc 3.1.

It's scary to hear saving corrupts data.

-----

2 points by akkartik 4046 days ago | link

Eek. I can see how that would be scary. Though to me it's very good to hear that somebody is doing stuff with arc :)

If you haven't made any changes to templates (created your own, or modified the ones in news.arc), you should be safe.

If you decide to migrate to anarki at some point, I'd be happy to help. Make a copy of your data, install it on a new server, and go over things to make sure everything looks good.

-----

1 point by lark 4037 days ago | link

Does this mean Anarki is not backward compatible with Arc?

-----

4 points by rocketnia 4037 days ago | link

In order to really define "backward compatible," you'd have to define Arc in a way that's implementation-independent. In Arc, the code is the spec, so as soon as the code changes, compatibility becomes subjective.

For instance, suppose Anarki defines a new utility and uses it to simplify the implementation of 10 other utilities. (It does this in a few places.) Now suppose my Arc 3.1 code has defined a utility with exactly the same name, and running this code on Anarki causes those other 10 utilities to misbehave, thus wrecking my program. This is a case where Anarki isn't compatible with Arc 3.1, but since it's so easy for me to choose a different name for my utility, it's hardly even worth mentioning. Pretty much any substantial update to Arc would break it in exactly the same way.

There's only one difference between Arc 3.1 and Anarki that's ever gotten in my way, and that's the way Anarki has revamped the [...] syntax to support multi-argument functions. When I say [do t] or [do `(eval ',_)], Anarki treats these as 0-arity functions, and when I say [let (a . b) _ ...], Anarki chokes when trying to search the dotted list for any underscored variables. Once again, this is the kind of change that's pretty easy to work around, and I can't really say Anarki is worse for having this extra functionality.
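
(By "work around" I just mean expanding the sugar by hand; [expr] is shorthand for (fn (_) expr), so for example:)

  ; instead of [let (a . b) _ (list b a)], which anarki's
  ; multi-arg bracket syntax mishandles, spell out the fn:
  (fn (_) (let (a . b) _ (list b a)))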

I'd say Arc code is not really portable across platforms, in the sense that not all code that works on one platform will work on another. However, I've found it pretty easy to develop my code so it'll work on multiple Arc platforms at the same time.

-----

3 points by akkartik 4037 days ago | link

Yeah, we've discussed this before: http://arclanguage.org/item?id=16178

-----