Arc Forum | akkartik's comments
2 points by akkartik 1 day ago | link | parent | on: Try Arc down, will be back up soon

Thanks, Evan!

Remind me, is this running Arc 3.1 or Anarki? Some third option?


3 points by evanrmurphy 1 day ago | link

Hi Kartik, it's Arc 3.1. Cheers.


5 points by akkartik 5 days ago | link | parent | on: Clojure Anaphoric Macros

"In `acond`, `aif` and `awhen`, `%test` or `%t` gets replaced with the 'test' form. `%then` gets replaced by the 'then' form, and `%else` by the 'else' form."

    (aif 9 (+ 9 %else) (+ 10 %test))
    ; => (if 9 (+ 9 (+ 10 9)) (+ 10 9))
"If nested you can access the previous level by doubling the first letter of the symbol. For example, '%ttest' would get you the previous [containing?] 'test' form, while '%eeelse' would get you the 'else' form 2 levels up. In the `aand` and `aor` macros you can reference arguments by using a symbol of form `<star><num>` where 'num' is the 1-index of the argument. Previous levels are accessed by doubling the `<star>` character. So the second 'test' form of an `aand` can by accessed with `<star>2` and the third argument of the previous [containing?] `aand` would be `<star><star>3`." [Working around the crappy pseudomarkdown here.]

    (aand (+ 30 20) *1)
    ; => (and (+ 30 20) (+ 30 20))

    (aand 1 2 "third" (aand 33 **3))
    ; => (and 1 2 "third" (and 33 "third"))

Utterly bonkers, and I love the hack :)


3 points by akkartik 3 days ago | link

Staring at these examples again, another thought occurs to me: do these macros cause multiple evaluation, or are the equivalences above just loose? Can anybody tell? I can't tell just from skimming the implementation.

If they're doing multiple evaluations they're a lot less useful than Paul Graham's original anaphoric macros even if they seem superficially more powerful/expressive.


3 points by i4cu 3 days ago | link

> do these macros cause multiple evaluation

From the readme:

"Note that as of right now the symbols are replaced by copying in the expression it references, not by binding to a common variable. Hence not suitable for using with expressions that cause side-effects or involve a lot of computation. That will be changed soon."

Reading this, I presume these expressions will be evaluated each time they are triggered within the operation, so the answer is: yes, they will. But, as the readme also suggests, this is a "WORK IN PROGRESS", and the author has stated an intention to bind the expressions to variables instead, which would solve the issue.
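
To make the distinction concrete, here's a minimal Racket sketch (mine, not the library's code) of what copying versus binding means for a side-effecting test form:

    (define (noisy-nine)
      (displayln "evaluating")
      9)

    ; Copying the test form into the body duplicates the side effect:
    (if (noisy-nine) (+ 1 (noisy-nine)) 'nope)
    ; prints "evaluating" twice

    ; Binding it once, as pg's anaphoric macros do, runs it only once:
    (let ([it (noisy-nine)])
      (if it (+ 1 it) 'nope))
    ; prints "evaluating" once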


2 points by hjek 4 days ago | link

If you include * whitespace * between words and stars, they are not parsed as an emphasized section.


2 points by akkartik 18 days ago | link | parent | on: Show Arc: Debate (very alpha)

Very cool. Can you elaborate on this tantalizing sentence?

> My goal is to make a simple hybrid of HN/Lobsters and Feedly/Google Reader and DropBox/WeTransfer/Google Drive/OneDrive.

What parts/features do you want to take from each, and why do you think they should coexist in a single product?


3 points by hjek 18 days ago | link

> What parts/features do you want to take from each, and why do you think they should coexist in a single product?

DropBox/WeTransfer/Google Drive/OneDrive is just what people use to send larger files. I'm not a fan of those sites, but I have observed that people I know find Filezilla and FTP very intimidating. There are some other free apps for this, but either they're in PHP or they're too complicated to install or use or hack on. Droopy[0] works well for this, so as a start I just cloned what Droopy does, just to check whether Racket/SQLite is fast enough for this use case. (It is.)

I like RSS as an idea and the fact that most sites have RSS feeds, but I think it's underappreciated by non-techies, who think that Facebook or Twitter are the only places you can "follow" stuff. For friends of mine to even consider using RSS for anything, I think it needs to be more accessible. I use Newsbeuter for reading RSS but I would never recommend it to non-techie friends, and it has some annoying bugs: sometimes text is missing, podcast download is flaky at best so I have to copy the URL out and get it with curl, and the interface becomes unresponsive when there are many feed items. So I'm thinking of trying to make a web-based RSS reader that encourages sharing and that at the very least I'd enjoy using myself. I think this would fit naturally with a News-style app, basically just having RSS feed items shown alongside other people's posts, but perhaps tweaking the ranking in a way that external feeds don't overshadow local content (assuming that external feed items are votable). Some non-techies have shown me the Feedly app, which (I think) uses RSS, and (I presume) has some kind of social recommendation system, so I just think it would be nice to have a free/libre web app doing something similar.

Then, the HN/Lobsters part is basically about having a minimalist web forum with tree structure replies and well thought out ranking algorithms and spam handling, instead of this thoughtless phpBB-style newest-first unstructured mess. One thing Lobsters does right here is that it notifies you when someone replies to a comment of yours. On HN it's possible for someone to reply and you just won't notice because it's so deeply buried in old threads. But I like all the different ways you can sort/view the content on HN.

Also, I've had some painful episodes trying to back up Wordpress sites. It's not pleasant, because there's little distinction between code and content, which is stored in a mish-mash of database entries and folders. WP barfs all over the disk. That's something HN does right, because all the content is in the `www` folder. But then, file upload to HN is not really a realistic thing.

So I'm just trying to mix-and-match things I like from different software. A lot of ideas. Haven't gotten far yet! But I guess it's ok to share work-in-progress here.

If I didn't have dockerphobia and mild JS-intolerance, I'd probably just be running Mastodon or something.

[0]: https://github.com/stackp/Droopy/issues?q=is%3Aissue+is%3Acl...


2 points by akkartik 18 days ago | link

I'm wondering:

1. Are you trying to build an RSS reader? Or do you want to support RSS versions of pages on your site?

2. Are you building a file transfer service? Or are you imagining that people will want to share files on a HN-like website? I'm not sure why that would be a common use case.


2 points by hjek 18 days ago | link

> 2. Are you building a file transfer service? Or are you imagining that people will want to share files on a HN-like website? I'm not sure why that would be a common use case.

File upload wouldn't make sense on HN, Arc Forum or Lobsters, and I get that it's entirely on purpose that it's not present on those sites.

But I've been hosting a News site, and those of my friends who use it often submit links to something they've put on Dropbox or Google Drive, so I'm simply observing that this use case is present and that file upload is missing on my site. Perhaps it's because it's a different type of exchange: a guitarist uploading a recording of a Charlie Parker solo, an artist uploading a drawing, etc.

Well, perhaps I should be using $CRAPPY_PHP_CMS_WITH_FILE_UPLOAD instead of News, but why can't I have well-structured content and file uploads at the same time?

It's also about future extensibility, ways of presenting these file uploads that I haven't thought about yet. I do occasionally write PHP for money. I don't have a very high opinion of that language, and I'd rather make something from scratch in Racket than write it as a Drupal plugin. Given the relative quantities of Drupal plugins and Racket web apps, that might be an unpopular opinion, but it shouldn't seem too strange on this forum. I'd prefer to make custom interactive web sites by extending a code base in Racket rather than PHP.

It's actually also for the use-case of being off-the-grid, yet needing to send/receive files to people in the same room, but not being able to find a USB stick. That situation happens surprisingly often for me. A zero-config web app is useful here because everyone has a browser on their computer (whereas not everyone has FTP / SSH / SMB).

> 1. Are you trying to build an RSS reader? Or do you want to support RSS versions of pages on your site?

I plan to do both: building a social RSS reader integrated into a News-like app, where the News-like app would provide RSS feeds in the same way Anarki does. I imagine it wouldn't be too difficult.



Absolutely valid criticism. I'd love to hear more about what you did between seeing the link here and navigating to that example program. I've been trying to build a path to gradually take programmers to an understanding of (this particular unconventional style of) assembly programming. For example, I'm curious how much of the Readme you read, and if you happened to notice that the Readme has an orientation on the x86 processor.

I also made a couple of tweaks to this particular example. I hadn't looked at it in a while. Thank you! https://github.com/akkartik/mu/commit/d6535f3382


2 points by hjek 18 days ago | link

I read the first section of the readme without really understanding much, but also not expecting to, as I don't know x86 assembly. Then I decided to at least give the examples a superficial look, as I'd noticed the word 'newcomers' in the readme. But I couldn't see the number `42` in a program meant to print `42`, so that's where I gave up.

Is this meant to be a tutorial for assembly noobs?

For comparison I think your readme for Wart[0] is more welcoming: Briefly explaining what it is and how to run it, and then straight onto a simple example that people can actually try out.

[0]: https://github.com/akkartik/wart


2 points by akkartik 18 days ago | link

That's a good point. I have similar instructions here, but they're in the second section, 3 screens down.

The audience is assembly-curious programmers, but you aren't expected to know any assembly. I just want to try to hook anyone interested in the goal. If you're interested in a stack you can understand from the ground up, I'm willing to try to explain things to you.


2 points by hjek 18 days ago | link

I might just be outside the target audience, for now. I don't even know any C. Realistically, I think it would have to be spoon-fed to me in some Bret Victor-esque alligators-and-eggs[0] manner for me to not lose focus.

I went through this absolutely fantastic SQL tutorial this week. Perhaps you might find their list of pedagogical principles[1] useful?

I think one thing that potentially could tempt me into low-level code would be making cool tunes[2][3].

[0]: http://worrydream.com/AlligatorEggs/

[1]: https://selectstarsql.com/frontmatter.html

[2]: https://www.youtube.com/watch?v=GtQdIYUtAHg

[3]: https://www.youtube.com/watch?v=qlrs2Vorw2Y


2 points by akkartik 18 days ago | link

I just made some tweaks to the Readme. What do you think?

https://github.com/akkartik/mu/commit/4650c8188f

https://github.com/akkartik/mu/blob/master/subx/Readme.md


2 points by hjek 18 days ago | link

Yea, I think that's more inviting.

I got to the point of compiling and trying out your programs now. `ex8`, `ex9` and `ex10` all segfault here.

Some time ago I was at a wedding, and I was terribly bored, until I found out that the guy to my left was writing washing machine software in assembly. In a way it seems awfully primitive, e.g. your `ex11.subx` is 350 lines long and prints out `.......`, but I guess in certain systems it's the only option, and it's what's underneath it all in any system.


2 points by akkartik 18 days ago | link

Thanks for trying them out!

Those examples expect arguments at the commandline, and I chose not to perform error checking in an example :) The focus lies elsewhere in them. See the comment at the top of each.

ex11.subx is running a test for each of those dots :)



Thanks for the questions! Part of the problem is that the repository isn't for a single 'language'. It's for my experiments with a new way to program that makes codebases easier to understand. As the biggest stress test for the new way, I'm trying to create a whole stack that is easy to understand. But it's just a prototype, or rather a series of prototypes. I've tried to make each prototype self-contained, with a Readme and instructions for running it that continue to work to this day.

Prototype 1 was a statement-oriented Assembly-like language built in Arc. You can still see it at https://github.com/akkartik/mu/tree/master/arc. I worked on it until about 2015. It's 3kLoC of code and 7kLoC of tests. The most substantial program I built in it was a hacky Lisp repl with syntax highlighting: https://github.com/akkartik/mu/blob/master/arc/color-repl.mu

Prototype 2 was a similar (but not compatible) language built in C++. It had the most work on it, and I also used it for teaching for a couple of years. I stopped working on it sometime this year. It's still available on the top level at https://github.com/akkartik/mu. It's 23kLoC of C++ code, and 5kLoC of Mu libraries. The most substantial program built in it was a 2-pane programming environment I used for teaching programming: https://github.com/akkartik/mu/tree/master/edit#readme. It's 12k lines of Mu code, about half of which is tests. You can see its sources colorized to be easier to read at the bottom of http://akkartik.github.io/mu.

Now I'm at prototype 3, SubX. It's in a very preliminary state. As above it is not intended to be compatible with existing prototypes. The previous prototypes were kinda-sorta designed to be easy to translate to native code, but they were still simple-minded tree-walking interpreters. I had hazy plans of gradually compiling them to native code from the top down, but that turns out to be beyond my ability. SubX starts from the bottom up, building an almost trivial syntax on top of raw native x86 machine code, and I'm gradually learning how to implement a compiler in it. Most people would say it's too hard to build a compiler in assembly these days when we have so many high level languages available. I'd like to see if it's manageable with the right framework for writing automated tests. So SubX starts out not with improvements to syntax, but to error checking and automated testing.

The most substantial program I've built in SubX so far is a port of the very initial version of Crenshaw's "Let's build a compiler" series[0]. All it does is read a number and emit native code to return that number in the exit status: http://akkartik.github.io/mu/html/subx/apps/crenshaw2-1.subx.... So SubX is still in a very early state.

SubX tests aren't inside the functions they test, they're just interleaved in the same file. Any label that doesn't start with '$' is the start of a new function. Functions that start with 'test-' are tests, and they all run when you run with a 'test' argument on the commandline.

The hope is to one day build a robust, hackable stack culminating in a high-level Lisp atop this infrastructure, a stack that leans into the Lisp tendency to fragment dialects by encouraging people to create incompatible forks -- while also making it easy (but not automatic!) to share code between the incompatible forks. It would still be some amount of work to copy the tests over and then make them pass, copying over bits of code at a time and modifying it as necessary. That's the sort of workflow I want to encourage rather than blindly upgrading software by running a package manager command. But before I can recommend it to others I have to see if I can get it to work. This repo is a test bed for eventually building tools to help people collaborate across incompatible forks.

Thanks for asking these questions! They're very helpful in understanding how others see the mess my repo has turned into. I'm going to try cleaning it up.

[0] https://compilers.iecc.com/crenshaw


3 points by i4cu 18 days ago | link

Ah... OK.

See, I always move up to the top-level directory and work my way down. So as I did this, my reference point for understanding got completely mixed up.

I'd make a top-level dir called 'prototypes' with details like the ones [1] you provided in this comment, and then branch down.

* Also, I'd probably limit referencing Mu, specifically, in SubX. I don't think it's helpful. Just remove this line:

  "We'll gradually port ideas for other syscalls from the old Mu VM in the parent 
  directory."
  
It doesn't add much value and takes people's attention away as they start trying to understand the relationship. If you need to, just add notes at the bottom.

[1] "The hope is to one day build a robust, hackable stack culminating in a high-level Lisp atop this infrastructure, a stack that leans into the Lisp tendency to fragment dialects by encouraging people to create incompatible forks -- while also making it easy (but not automatic!) to share code between the incompatible forks."

Re: [1]

Yeah, if you can build a language that allows someone to build arc and even a clojure version of arc using the same base code, that would be cool. I'd immediately try spinning up a new version of arc with clojure's tables and table functions :)


3 points by akkartik 18 days ago | link

> I'd make a top-level dir called 'prototypes'... and then branch down.

Yeah, I've been planning a reorganization like that. Unfortunately the reorg is going to break all the links I shared before I thought of this :/ So I'm going to wait a bit before I make the switch.

> if you can build a language that allows someone to build arc and even a clojure version of arc using the same base code, that would be cool.

I'm not aiming quite there. That would be really hard to do, and then it would be impossible to keep in sync with existing language upstreams over time, given the underlying platform will be very different. The way I imagine providing something similar is this: there would be multiple forks of the Mu stack for providing a Clojure-like or Arc-like high-level language. But these languages wouldn't be drop-in replacements for real Arc or real Clojure. Also, each fork would try to minimize the number of languages it relies on to do its work, so you wouldn't immediately be able to run both Clojure and Arc on a single stack. Because every new language used in a codebase multiplies the comprehension load for readers. (I ranted about this before at https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1...)

Basically, I want to commoditize the equivalent of a Lisp Machine for any language. My goal is to help people collaborate across incompatible forks, _but_ the forks have to all have certain characteristics that no existing software has (because it all assumes rigid compatibility requirements).


3 points by i4cu 18 days ago | link

> Basically, I want to commoditize the equivalent of a Lisp Machine for any language. My goal is to help people collaborate across incompatible forks...

I'm struggling to understand, so forgive me, but the reasons why and what you're doing seem to change, or at least are manifold and thus hard to unpack. Or maybe the different prototypes are messing me up.

So let me see if I can unpack it (at least for myself :)

Your goals are:

1. To build a prototype 'x' compiler/language that's more robust and easier to maintain because it has been built with testing capabilities in mind (I'm imagining a model with convenience features or set requirements to accomplish this).

2. Build prototype 'x' to permit developers to build their own compiler/language(s) that inherit the benefits from prototype 'x', thus making that process more enjoyable and more likely to succeed.

3. Permit/Encourage greater collaboration amongst developers, on separate prototype 'x' projects, because prototype 'x' is robust and developers are working under a shared model that has core concepts/features that act as a bridge for that collaboration to happen.

Does this seem right?

So is this a project that you're doing because you're passionate about it and you think it can change things (i.e. improve the lives of other people)? Or is this a product idea where you have assessed there's a need and you're going to fill it?


4 points by akkartik 18 days ago | link

Thanks for the probing questions! Yes, I don't mean to move the goalposts on my reasons. I feel like I'm reaching for something fundamental that could end up having lots of different benefits, mostly things I can't anticipate.

I don't actually care that much what the high level language is at the top. I'm biased toward Lisp :) so that's what I'm going to build towards. But if others want a different language I want to make it easy to switch the superficial syntax to suit their taste -- and convert all existing code on the stack so everything is consistent and easy to read. If some others want to add new runtime features, I want to make that easy too. Finally, I want it to be tractable to mix and match syntax and runtime features from different people and still end up with something consistent. Using tests at the bottom-most layers, and building more rigorous type systems and formalisms as necessary higher up.

The key that would make this (and much else) possible is making the global structure of the codebase easier to comprehend so that others can take it in new directions and add expertise I won't ever gain by myself, in a way that I and others can learn from.

Ignore the other prototypes in this repo; they're just details. The goal I'm working toward is a single coherent stack that is easy for others to comprehend and modify.

This isn't a product, in the sense that I can't/won't charge money for it. I'm not really making something others want right now. I'm trying to make something I think the world needs, and I'm trying to make the case for something the world hasn't considered to be a good idea yet. I'm sure I don't have all the details nailed down yet :)


3 points by i4cu 18 days ago | link

> I'm trying to make something I think the world needs, and I'm trying to make the case for something the world hasn't considered to be a good idea yet. I'm sure I don't have all the details nailed down yet :)

"To boldy go where no man has gone before"

"Second star on the Left"

I can get behind that :)


4 points by akkartik 21 days ago | link | parent | on: Ask: When to use lists vs tables?

Lists are easy to split and merge, insert and delete nodes from, but searching them is slow. Tables are fast for searching and insertion and deletion, but they don't naturally encode ordering relationships.

Check out this chapter: http://www.gigamonkeys.com/book/they-called-it-lisp-for-a-re... Tables are often great for things Lisp historically used lists for. But the one thing they can't do is represent code and trees in general. That's where cons cells come in.
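
To make the tradeoff concrete, a tiny Racket sketch (names made up):

    ; Tables: fast keyed lookup, but no inherent ordering.
    (define scores (make-hash))
    (hash-set! scores 'akkartik 4)
    (hash-ref scores 'akkartik)   ; => 4

    ; Cons cells: nested pairs naturally encode trees, including code.
    (define expr '(+ 1 (* 2 3)))
    (car expr)     ; => +
    (caddr expr)   ; => (* 2 3)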


3 points by krapp 21 days ago | link

>But the one thing they can't do is represent code and trees in general. That's where cons cells come in.

They can if you can nest them, or include references to other keys (like foreign keys in SQL or something). It's inefficient, but it's possible. You can even represent trees in a flat array if you work hard enough.

Although maybe I should be pedantic and clarify that you can represent trees with tables but not in tables natively, of course.
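
Here's a sketch of that idea in Racket, with made-up ids playing the role of foreign keys:

    ; A tree flattened into a table: id -> (label list-of-child-ids).
    (define nodes (make-hash))
    (hash-set! nodes 0 '("root" (1 2)))
    (hash-set! nodes 1 '("left leaf" ()))
    (hash-set! nodes 2 '("right leaf" ()))

    (define (children id)
      (cadr (hash-ref nodes id)))
    (children 0)   ; => (1 2)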


2 points by akkartik 21 days ago | link

You got me, I was careful to use the word 'natural' in the first paragraph but I forgot to do so in the second ^_^


3 points by akkartik 27 days ago | link | parent | on: Ask: Why is there no "save-list"?

As i4cu pointed out, it's because write can't handle tables. There _is_ a save-list, but it's called writefile, and it works not just on lists but on most primitive datatypes. However, it doesn't work on tables.

The naming isn't very consistent here. Arguably writefile should be called save-file. We should preserve the write prefix for operations that write to opened Racket file ports (https://docs.racket-lang.org/reference/file-ports.html) and save for operations that write to provided filenames. That convention is followed by save-table and write-table.

And of course we could simplify names still further if we took i4cu's suggestion and had write work with tables.


3 points by kinnard 27 days ago | link

Is it straightforward to change them to be consistent?


2 points by akkartik 27 days ago | link

Yup, just rename the macro and search-and-replace all calls. Then try running all tests! (Updating incompatibilities at the top of arc.arc is also appreciated.)


2 points by hjek 27 days ago | link

Yep. Or change `save-table` to be an extension of `writefile` using `defextend`:

   (defextend writefile (val name) (is (type val) 'table)
     [the body of the save-table function here])


2 points by rocketnia 27 days ago | link

Even if you merge `writefile` and `save-table` like this, but not `readfile1` and `read-table`, then people still need to know, at development time, what type of data is in the file in order to read it, so they might as well use a type-specific way to write it as well. Unfortunately, merging `readfile1` and `read-table` isn't really possible, since their serialized representations overlap; they can't reconstitute information that was never written to the file to begin with.

From a bigger-picture point of view, this seems like it would become a non-issue once Arc had its own reader. I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes. An Arc-specific reader would naturally construct Arc's mutable tables instead.

Doesn't Racket's reader give us similar problems in that it reads immutable strings and cons cells, too? So these problems could all be approached as a single project.

In the short term, it's not a project that would need a whole new reader. It could just be an adaptation of Racket's existing reader... something like this:

  (define (correcting-arc-read in)
    (let loop ([result (read in)])
      (match result
        
        ; TODO: See if this should construct a Racket mutable cons cell
        ; instead (`mcons`). Right now this just creates an immutable
        ; one, which should be fine since Arc uses an unsafe technique
        ; to mutate those.
        [(cons a b) (cons (loop a) (loop b))]
        
        [ (? hash?)
          (make-hash
            (map
              (match-lambda [(cons k v) (cons (loop k) (loop v))])
              (hash->list result)))]
        [ (? string?)
          ; We construct a new mutable string with the same content as
          ; `result`.
          (substring result 0)]
        ; We handle tagged values, which are represented as mutable
        ; Racket vectors.
        [(? vector?) (list->vector (map loop (vector->list result)))]
        
        ; We handle various atomic values. (TODO: Add more of these
        ; cases until we've accounted for every writable type Arc
        ; supports. Alternatively, just make this a catch-all
        ; `[_ result]`.)
        [(? number?) result]
        [(? symbol?) result])))

Writing Arc's `queue` type might be tricky, since that representation relies on sharing. It's possible queues (and other tagged values in general) should have a customized read and write behavior.
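
For what it's worth, a hypothetical usage sketch (the filename is made up), with `correcting-arc-read` standing in for Arc's `read` when loading a file produced by `write`:

    ; Read one value back, getting mutable tables/strings throughout:
    (define loaded
      (call-with-input-file "tables.db" correcting-arc-read))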


3 points by hjek 27 days ago | link

> I assume the problem with reading tables using `read` is that Racket's reader constructs immutable hashes.

Racket also has mutable hashes created using `make-hash`[0] rather than `hash`. It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?

[0]: https://docs.racket-lang.org/reference/hashtables.html#%28de...


3 points by rocketnia 27 days ago | link

"It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?"

I'm pretty sure the Racket reader never reads a mutable hash, but that it's possible for a custom reader extension to do it.

Some of Racket's approach to module encapsulation depends on syntax objects being deeply immutable. In particular, a module can export a macro that expands to (set! my-private-module-binding 20) but which uses `syntax-protect` so that the client of that macro can't use the `my-private-module-binding` identifier for any other purpose. If the lists constituting a program's syntax were usually mutable, then it would be hard to stop the client from just mutating that expansion to make something like (list my-private-module-binding 20), giving it access to bindings that were meant to be private.

I think this is why Racket's `read-syntax` creates immutable data. As for why `read` does it too, I think it's just a case of `read` being a relatively unused feature in Racket. They don't have many use cases for `read` that they wouldn't rather use `read-syntax` for, so they don't usually have reasons for the behavior of `read` to diverge from the behavior of `read-syntax`.

All this being said, they could pretty easily add built-in syntaxes for mutable hashes, but I think it just hasn't come up. Who ever really wants to read a mutable value? In Racket, where immutable values are well-supported by the core libraries, you aren't gonna need it. In the rare case you want it, it's easy enough to do a deep traversal to build the mutable structure you're looking for (like my example `correcting-arc-read` does).

It only comes up as a particular problem in Arc. Arc's language design doesn't account for the existence of immutable values at all, so working around them when they appear can be a bit quirky.


3 points by hjek 27 days ago | link

> I'm pretty sure the Racket reader never reads a mutable hash, but that it's possible for a custom reader extension to do it.

No..?

    > (require racket/serialize)
    > (define basket (make-hash))
    > (hash-set! basket 'fruit 'apple)
    > (write-to-file (serialize basket) "basket.txt")
    > (define bucket (deserialize (file->value "basket.txt")))
    > (hash-set! bucket 'fruit 'banana)
    > bucket
    '#hash((fruit . banana))


4 points by rocketnia 26 days ago | link

The Racket reader reads a seven-element list there, not a mutable hash.

Looks like you're proposing to use a two-stage serialization format. One stage is `read` and `write`, and the other is `serialize` and `deserialize`. At the level of language design, what's the point of designing it this way? (Why does Racket design it this way, anyway?)

I can see not wanting to have cyclic syntax or syntax-with-sharing in the core language just because it's a pain in the neck to parse and another pain in the neck to interpret. Maybe that's reason enough to have a separate `racket/serialize` library.

But isn't the main issue here that Arc's `read` creates immutable tables when the rest of the language only deals with mutable ones? The mutable tables go through `write` just fine, but `read` doesn't read the same kind of value that was written out. If and when this situation is improved, I don't see where `racket/serialize` would come into play.


2 points by hjek 25 days ago | link

> Looks like you're proposing to use a two-stage serialization format.

I wasn't really proposing anything, only pointing out that it's not the case that Racket never can read mutable hashes, and then illustrating that with a code example.


3 points by rocketnia 25 days ago | link

Hmm, all right. I didn't want to believe that was the point you were trying to make. In that case, I think I must not have conveyed anything very clearly in the `correcting-arc-read` comment.

I've been trying to respond to this, which was your response to that one:

"Racket also has mutable hashes created using `make-hash` rather than `hash`. It could just be that the tables are not serialised as something Racket reads as mutable hashes when deserialising it back again?"

In the `correcting-arc-read` comment, I used `make-hash` in the implementation of `correcting-arc-read`, so I assumed your first sentence was for the edification of others. I found something to respond to in the second, which kinda pattern-matched to a question I had on my mind already ("Can't the Racket reader just construct a mutable table since what was written was a mutable table?").

As for my response to your "No..?"...

One of the purposes of `correcting-arc-read` is that (when it's used as a drop-in replacement for Arc's `read`) it makes `readfile1` return mutable tables. So if anyone had to be convinced that a reader that returned mutable hashes could be implemented at all in Racket, I thought I had shown that already. When it looked like you might be trying to convince me of something I had already shown, I dismissed that idea and thought you were instead trying to clarify what your proposed alternative to `correcting-arc-read` was.

Seems like I've been making bad assumptions and that as a result I've been mostly talking to myself. Sorry about that. :)

Maybe I oughta clarify some more of the content of that `correcting-arc-read` comment, but I'm not sure what parts. And do you figure there are any points you were making that I could still respond to? I'd better not try to assume what those are again. :-p


3 points by kinnard 25 days ago | link

Sounds like I opened a can of . . . lists . . .


3 points by akkartik 27 days ago | link

I have an old fork (https://github.com/akkartik/arc) that has an extensible, generic pair of functions called serialize and unserialize, which emit not just the value but the value tagged with its type. read and write are built atop them.
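
The rough shape of that approach, as a fresh sketch in Racket (not the code from the fork): tag each value with its type on the way out, and rebuild the mutable variant on the way in.

    (require racket/match racket/list)

    (define (my-serialize v)
      (cond [(hash? v)   (list 'table (hash-map v (lambda (k x)
                                                    (list (my-serialize k)
                                                          (my-serialize x)))))]
            [(pair? v)   (list 'cons (my-serialize (car v)) (my-serialize (cdr v)))]
            [(string? v) (list 'string v)]
            [else        (list 'atom v)]))

    (define (my-unserialize s)
      (match s
        [(list 'table kvs)  (make-hash (for/list ([kv kvs])
                                         (cons (my-unserialize (first kv))
                                               (my-unserialize (second kv)))))]
        [(list 'cons a d)   (cons (my-unserialize a) (my-unserialize d))]
        [(list 'string str) (string-copy str)]   ; fresh mutable string
        [(list 'atom v)     v]))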


1 point by kinnard 24 days ago | link

I don't think I can handle this while I'm building the app I'm working on . . .


2 points by akkartik 35 days ago | link | parent | on: Knark - rewrite in plain Racket?

This is the Rich Hickey talk from 2016: https://www.youtube.com/watch?v=oyLBGkS5ICk. If you listen to it, it was made in an effort to adjust the community's trajectory. I too would like to hear if adjustments did happen.

Basically the way I interpreted it[1] is that "major version" is a meaningless concept. If you're making incompatible changes, rename the package. If that's a disincentive to making incompatible changes -- great!

[1] http://akkartik.name/post/versioning


2 points by hjek 34 days ago | link

Nice blog post.

> Rich Hickey pointed out last year that the convention of bumping the major version of a library to indicate incompatibility conveys no more actionable information than just changing the name of the library.

Yes, I guess Hickey is applying this idea of immutability not only to data structures but also to APIs and even databases[0] with the proprietary database service Datomic. Interestingly, Datomic uses Datalog as its query language, so it's straightforward to apply at least some of those ideas with the Racket Datalog package[1].

As of now I have a basic web forum working with all data storage done in Datalog (except for file uploads!). I'll post some code once the design is a bit more settled. It's still in the breaking-things-all-the-time phase.

What I find a bit tricky about Datalog is that relations are stored together with other facts, which in my mind feels a bit like storing code in a database, but maybe I just haven't wrapped my head around it yet.

Does anyone here have favourite articles or talks about logic programming?

[0]: https://www.youtube.com/watch?v=EKdV1IgAaFc

[1]: https://docs.racket-lang.org/datalog/interop.html


3 points by i4cu 34 days ago | link

> What I find a bit tricky about Datalog is that relations are stored together with other facts, which in my mind feels a bit like storing code in a database, but maybe I just haven't wrapped my head around it yet.

I can't speak for Datalog, but I've used Datomic.

If it's the same, then a 'fact' is composed of an entity (the id), plus an attribute, plus a value.

The relationships are made by storing an entity id into the value slot of another fact. Thus the model is both flat (being a list of facts) and hierarchical (they can point to each other). It pretty much becomes a graph database. Is that what you mean?


2 points by hjek 34 days ago | link

> I can't speak for Datalog, but I've used Datomic.

Cool!

> The relationships are made by storing an entity id into the value slot of another fact. Thus the model is both flat (being a list of facts) and hierarchical (they can point to each other). It pretty much becomes a graph database. Is that what you mean?

Yes, exactly! What I worry about (because I'm fairly new to logic programming) is whether it could potentially be difficult to update a program where storage of business logic and storage of data aren't separated.

Let's say we have a Hacker News web app. One day Alice submits a story with the title "How to peel onions", and we are thinking "Why on earth did she post that here?!?" So we add this relation to our code: a story is `irrelevant` if it has the word "onion" in the title. Then, the next day, we get another fact: now Bob has submitted a story called "How onion routing works". This new story by Bob then makes us reconsider our definition of `irrelevant`.

In a typical imperative program we'd just edit the code and redefine the `irrelevant` predicate, and it would take effect next time we run the program (or instantly if we enter it at a repl). But here in our logic program we store this `irrelevant` relation in our graph database, so even though we have removed it from our code, it is still sitting there in the database along with all the facts, outside the reach and responsibility of git, or whichever VCS we're using.

Yes, so my question is: how do you practically deal with changes to business logic in logic programming, where data storage and relation storage are one and the same? Perhaps Datomic just avoids this issue somehow? I may also be missing or misunderstanding something.


3 points by i4cu 34 days ago | link

Can't say I know what the options are since I don't know Datalog or the DB you're using, but is this reasonable?:

  -----------------------------------------------------------
  Entity              | Attribute | Value
  -----------------------------------------------------------
  person-id-001       | name      | Alice
  person-id-001       | stories   | [story-id-001, story-id-002...]   

  story-id-001        | headline  | "How to peel onions"
  irrelevant-word-001 | stories   | [story-id-001, ...]
  irrelevant-word-001 | word      | onion
  -----------------------------------------------------------

So if you decide that onion is no longer irrelevant then delete the entity 'irrelevant-word-001'. Which seems, at least to me, better than making code pushes.

So all of this assumes a few things:

- Your DB supports a cardinality of 'many' items in the value slot.

- Your query language can perform joins.

Of course none of this helps when someone changes a headline, but only full-text search DB's will help you do that.

Edit: made edits.


3 points by i4cu 34 days ago | link

To be complete (and somehow I edited this out):

  -----------------------------------------------------------
  Entity        | Attribute          | Value
  -----------------------------------------------------------
  globals       | irrelevant-words   | [irrelevant-word-001, ...]   
  -----------------------------------------------------------

So you would also need to remove the value 'irrelevant-word-001' from the above. At least this is how I would do it in Datomic anyway.

What's interesting (at least to me) is that Datomic has a function called 'retractEntity' [1] which auto-magically removes all references of an entity in any value slot when you retract the entity. Man I love Datomic :)

[1] https://docs.datomic.com/on-prem/transactions.html#dbfn-retr...


2 points by hjek 34 days ago | link

> So if you decide that onion is no longer irrelevant then delete the entity 'irrelevant-word-001'. Which seems, at least to me, better than making code pushes.

I can make sense of that when there's just one instance of this app running. Yet imagine the scenario where the web app has been published, and suddenly other people are running this web app. If the business logic is then changed, somehow I'd have to tell those people: "Oh btw, when you're running `git pull` next time, then you just also gotta run this query to retract some of the old relations from the database."

Definitely not a problem for me yet, but I can just smell it coming. I could add those retractions to the code, but they would have to stay there indefinitely, because it's not possible to tell if those retractions have taken place on everyone's databases yet.

Maybe I'm over-thinking this.

> What's interesting (at least to me) is that Datomic has a function called 'retractEntity'

Looks like the one called `~` in Racket's Datalog[0].

[0]: https://docs.racket-lang.org/datalog/interop.html#%28form._%...
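
In Racket's datalog that looks something like this (a minimal sketch):

    (require datalog)
    (define thy (make-theory))
    (datalog thy (! (parent alice bob)))   ; assert a fact
    (datalog thy (? (parent X Y)))         ; => one answer
    (datalog thy (~ (parent alice bob)))   ; retract it again
    (datalog thy (? (parent X Y)))         ; => no answers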


2 points by i4cu 34 days ago | link

Quick questions before going further.

Is this irrelevant-word example a real feature you're building into the app or a contrived example to understand Racket DataLog DB use?

If it's a real feature, and I'm assuming it is, then I'm also assuming that when a story is submitted you're parsing the title and adding the relationship to the current set of irrelevant words that are stored.

So the questions are:

1. How are you going to remove the past relationships between stories and an irrelevant word that's getting removed? (looks like we've answered this).

2. How are you going to make sure past stories gain the relationship to newly added irrelevant words?

3. How are you going to handle title changes?

After you get a handle on these, what you really need to do is provide an interface, from within the app's admin tools, to trigger the noted functionality. This way the business logic is in the app, and it's modifying the data.


2 points by hjek 33 days ago | link

> Is this irrelevant-word example a real feature you're building into the app or a contrived example to understand Racket DataLog DB use?

It's a simplified and slightly contrived illustration of an issue in this pre-alpha code I haven't published yet, perhaps just because I haven't thought of a name for the project yet. But yes, let's assume it's a real feature for now.

> 3. How are you going to handle title changes.

I like Hickey's idea of accretion of data - with a timestamp! - and not forgetting previous facts. I think he's talking about it in The Database as a Value[0]. So, a story could have a few different titles in the database, and the newest one is the one you get to see.

The thing is, it's easy to add a timestamp to facts as a way of not considering old facts without forgetting them, but not to relations. For example in Racket Datalog[1]:

    (! (voted "i4cu" 'up 134))
can easily get a timestamp:

    (! (voted "i4cu" 'up 134 1542151773))
but that would not really make sense in a relation like

    (! (:- (root A B) (ancestor A B) (parent null A)))

So, I don't feel there's a need for ever retracting facts, because timestamps solve that. (Even when deleting something, you could just add the fact that it has been deleted.) But with relations (a.k.a. business logic) I think I will need to retract things, which is tricky because this logic is not only present in the code but also in the database. This was the problem I was asking about.

> 2. How are you going to make sure past stories gain the relationship to newly added irrelevant words?

Datalog queries reflect the current set of facts and relations, so the possible irrelevance of a story would not be stored anywhere, so it wouldn't need to be updated.

> 1. How are you going to remove the past relationships between stories and an irrelevant word that's getting removed?

Same as above. For example, in a place-oriented program a story object could have a boolean attribute `irrelevant?`, whereas I'm sending a query every time this value is needed, so no stale `irrelevance` attributes are stored anywhere.

> After you get handle on these then what you really need to do provide an interface, from within the apps admin tools, to trigger the noted functionality. This way the business logic is in the app and it's modifying the data.

Yes, that is kind of there already, as in having functionality for changing titles. The `irrelevant` functionality is not there now.

Ok, I think I just need to work on getting this code publishable, because it might be easier to discuss tangible examples.

[0]: https://www.infoq.com/presentations/Datomic-Database-Value

[1]: https://docs.racket-lang.org/datalog/interop.html


2 points by i4cu 33 days ago | link

Yeah, Datomic doesn't expect the relationships to be stored in the DB. It stores a bunch of indexes for you and it has a great query language, but that's it.

So where will your data be? In a local data structure? I read your Racket Datalog link, but it doesn't show any details for the database side (i.e. durability etc.) even though it's labelled a database.

Also, I'm curious what made you choose a graph db. It seems like you're inheriting a lot of complexity and I'm wondering what the benefit is over a more traditional sql or nosql db.

reply

2 points by hjek 33 days ago | link

> So where will you're data be? In a local data structure?

Yes, I think. The database is just stored in memory, but it can be serialized and saved to disk using `write-theory`[0] and loaded with `read-theory`. That is what I'm doing for now, and it's very naive and inefficient to do a full database dump rather than just appending new data; I presume it's particularly in this area that Datomic is way more optimised and well thought out.
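
Concretely, the naive version is something like this sketch (assuming `write-theory` and `read-theory` use the current output and input ports):

    (require datalog)

    (define (save-theory! thy path)
      (with-output-to-file path #:exists 'replace
        (lambda () (write-theory thy))))

    (define (load-theory path)
      (with-input-from-file path
        (lambda () (read-theory))))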

> Also, I'm curious what made you choose a graph db. It seems like you're inheriting a lot of complexity and I'm wondering what the benefit is over a more traditional sql or nosql db.

Well, I did the initial work on the web app: creating user accounts, adding posts and replies, and then I got to data storage. Initially I did a News-style flat-file database, just saving data as lists in files, that are then loaded into memory when the program starts. It mostly worked but also felt a bit complicated, and I thought that perhaps I should just use a proper database?

What I like about news.arc is that you can just launch it without any configuration, so MySQL and PostgreSQL were out of the question, and I started reading a bit about SQLite. But I've also had this fascination with logic programming, from what people are posting here[1][2], and from reading a bit of The Reasoned Schemer, and I watched some of those Rich Hickey talks again, where he talks about Datalog, which happens to be available for Racket.

There are just some things that are incredibly simple in declarative/logic programming. For example, if you have facts about stories being `parent` of their replies, then it's simple to just define the `ancestor` relation, and when you have the `ancestor` relation, you automagically get `descendants` without having to write any code, because it's just the inverse of `ancestor`:

    (! (:- (ancestor A B)
           (parent A B)))
    (! (:- (ancestor A B)
           (parent A C)
           (ancestor C B)))
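
And then descendants are just the same relation queried from the other side (assuming the facts live in a theory `thy`; item ids made up):

    (datalog thy (? (ancestor 1 X)))   ; all descendants of item 1
    (datalog thy (? (ancestor X 9)))   ; all ancestors of item 9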

But I've also bumped into some questions - more practical than theoretical - and that is why it's interesting hearing about your experience with Datomic, and why I'm asking here.

So, SQLite is still on the table. I'm not too familiar with NoSQL, but my impression is that they are all about speed and scalability of data storage. I haven't used MongoDB, but isn't it essentially just like storing JSON in a file, except faster? It would be interesting if any of those could be used in conjunction with Datalog though, if they don't add too much complexity for the sake of increased speed.

[0]: https://docs.racket-lang.org/datalog/interop.html#%28def._%2...

[1]: http://arclanguage.org/item?id=20650

[2]: http://arclanguage.org/item?id=20519


2 points by i4cu 32 days ago | link

So a few things I wanted to point out:

Datomic vs. Datalog

Datomic uses Datalog as part of its query language, but that's pretty much where the comparison should end. Things like "treating the database as a value", and features such as data accretion that Rich talks about, have nothing to do with Datalog. They're features of Datomic. So for example, when you mention never retracting data, your data size is going to grow continuously unless you write your own data management layer on top. Datomic, on the other hand, does this for you. When you want to query the database over time, you're going to need to store time intervals for all of your data and incorporate that into each query. Whereas in Datomic (which has a time log) you can pass in the DB itself as a value (with an associated time interval) and Datomic will make sure your queries are working against the dataset that accounts for the time interval.

I'm pointing this out because it seems to me that you're doing (or are going to be doing) a lot of work that may not be worth it for what you're trying to accomplish.

Nosql

> I'm not too familiar with NoSQL, but my impression is that they are all about speed and scalability of data storage.

Yes and no. Often speed can be a feature NoSQL DBs advertise, but really, for me anyway, it's about flexibility and ease of use. Traditional RDBMSes, for example, require creating schemas. Many NoSQL databases don't require a schema at all, which makes them easier to use and more flexible to change. NoSQL databases are often key-value stores, so it can be really easy to take a hash-map or table of data from your code, dump it into a NoSQL datastore, and be able to query it.

My personal favourite is Redis and it might be worth considering for your app.

You can:

- store a value under a key [1]

- store table data [2]

- store values in a set [3] (which allows intersection/difference queries)

- store values in a sorted-set [4] (which allows you query by some numerical value like timestamp)

- use it to manage relationships [5]

The reason I mention Redis is that the HN app is very well suited to it. HN only keeps 'x' amount of data in memory, and in Redis the data lives in memory. Also, Redis allows you to set expiry times on data for auto-eviction [6]. And Redis supports ordered lists [7], which can make it useful for lisp-based languages.

However it's not embedded. And if that's a requirement I'd almost suggest you move away from Racket and adopt a language that has more options for embeddable databases. I guess if you're willing to roll your own (and it looks like you may be) then that's awesome too.

But in case you decide otherwise... The library I use is Redis Carmine [8], but there are Racket clients [9].

1. https://redis.io/commands/set

2. https://redis.io/commands/mset

2a. https://redis.io/commands/mget

3. https://redis.io/commands/sadd

4. https://redis.io/commands/zadd

5. search: "Representing and querying graphs using an hexastore" https://redis.io/topics/indexes

6. https://redis.io/commands/expire

7. https://redis.io/commands/lset

8. https://github.com/ptaoussanis/carmine

9. https://redis.io/clients#racket


2 points by hjek 32 days ago | link

> Datomic uses DataLog as part of its query language, but that's pretty much where the comparison should end. Things like "treating the database as a value", and features such as data accretion that Rich talks about have nothing to do with DataLog.

I'm not sure I totally agree with this. I think that apart from talking about the design of Datomic, he also has a more general point against what he calls PLOP (PLace Oriented Programming), which Datalog does address.

For example in plain Racket a value is lost if something else is put in its place:

    > (define foo 'bar)
    > (define foo 'baz)
    > foo
    'baz
In Datalog you just accrete facts:

    > (! (is foo bar))
    > (! (is foo baz))
    > (? (is foo X))
    is(foo, bar).
    is(foo, baz).

Hickey is also mentioning how git doesn't do PLOP in that it doesn't throw out your commit history (without you asking it to do so).

> The reasons I mention Redis is that the HN app is very well suited to it. HN only keeps 'x' amount of data in memory. And in Redis the data lives in memory. Also Redis allows you to set expiry times on data for auto eviction [6].

Interesting. Just checked news.arc, and yes, `initload*` is set to 15000. Interesting idea from Redis with expiry times; I'll check it out. I hadn't considered the scenario of storing enough text to max out memory, because it would probably be premature optimisation, but good to keep in mind. I'd like to give Redis/Rackdis a try; thanks for the suggestion. I've been hosting an Etherpad Lite instance, and Redis was painless to set up.

> I'm pointing this out because it seems to me that you're doing (or are going to be doing) a lot of work that may not be worth it for what you're trying to accomplish.

Yes, my priorities here are definitely to make the code as brief and simple as possible, and to not have to do too much work. With plain Datalog it's very little work to timestamp a fact, and it's also kind of necessary, e.g. to figure out which fact is most recent when previous facts are not removed. I'm just trying to get the gist of Hickey's ideas here.


2 points by i4cu 32 days ago | link

> PLOP (PLace Oriented Programming), which Datalog does address.

Yeah, I was thinking more along the lines that Datomic has built-in functionality to address the caching, cache eviction, and indexing that goes along with all that data accumulation. But you're correct, Datalog does accumulate facts.

> Interesting. Just checked news.arc, and yes `initload*` is set to 15000.

I did the same thing, about 6 or 7 years ago, that you're doing now. I ported HN to Clojure (which is actually how I learned Clojure). If memory serves me correctly, when I was doing the work I realized I needed a real DB if I wanted to support load balancing, i.e. I needed to centralize the data for the authentication and fnid session info. I think Arc calls them fnids... You probably know better than I do now, but Arc has all this code to expire these session fnids, and so, for me, Redis was just a good fit for that task.

Anyways, I'll be sure to take a look at the final result of your work.

Cheers.


2 points by hjek 32 days ago | link

> I did the same thing, about 6 or 7 years ago, that you're doing now. I ported HN to Clojure (which is actually how I learned Clojure).

Cool!

> I needed to centralize the data for the authentication and fnid session info. I think Arc calls them fnids... You probably know better than I do now, but Arc has all this code to expire these session fnids and so, for me, Redis was just a good fit for that task.

The Racket web server is quite "batteries included" and comes with these different managers[0] for dealing with expiration of sessions/continuations, such as the LRU manager:

> The memory limit is set to `memory-threshold` bytes. Continuations start with 24 life points. Life points are deducted at the rate of one every 10 minutes, or one every 5 seconds when the memory limit is exceeded. Hence the maximum life time for a continuation is 4 hours, and the minimum is 2 minutes.

> If the load on the server spikes—as indicated by memory usage—the server will quickly expire continuations, until the memory is back under control. If the load stays low, it will still efficiently expire old continuations.

[0]: https://docs.racket-lang.org/web-server/servlet.html?q=respo...
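
Wiring that up looks roughly like this sketch (the threshold and the expiration handler are placeholders):

    (require web-server/managers/lru
             web-server/http)

    (define manager
      (make-threshold-LRU-manager
       ; called when a request arrives for an expired continuation
       (lambda (req)
         (response/xexpr '(html (body "Session expired"))))
       (* 128 1024 1024)))   ; memory threshold, in bytes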


2 points by i4cu 31 days ago | link

> If the load on the server spikes...

When I was referring to load balancing and centralizing the data I was referring to many web servers sharing a centralized/external source for auth/session data.

I'm unfamiliar with Racket's web server 'servlets'. The docs are a little unclear (at least to me). Can these servlets live on a separate server so that the data can be shared between web servers? I'm guessing that was/is not a requirement for you, but I'm just interested in knowing if that's how it can work.

Uh oh, you're getting me interested in Racket now. I can't have that... I have too many projects :)

Edit: I guess at the end of the day these servlets are web servers, right? So you can, even if you have to do it over HTTP and build an API.

reply

2 points by hjek 31 days ago | link

> Can these servlets live on a separate server so that the data can be shared between web servers?

Probably. I assume that serializable continuations[0] from stateless servlets can just be stored wherever, like in Redis or something, instead of in the memory of one server.

> I ported HN to Clojure

If that is something you have published, it'd be fun to see, whether it's finished or not.

> Uh oh, you're getting me interested in Racket now.

My impression is that Clojure is faster, less verbose (partly due to clever syntax), and provides more immutable data structures than Racket. But when it comes to documentation and error messages, I find Racket more coherent and comprehensible.

Say, if I wanted to connect to a SQL database, with Racket I'd use the DB module[1], end of discussion. But with Clojure there's Korma, ClojureQL, Persist, HoneySQL, Yesql, a JDBC wrapper from Clojure contrib, SQLingvo, oj, Suricatta, aggregate, Hyperion, HugSQL, and probably a few more[2][3]. That multitude of libraries with similar purposes may be useful in some cases, sure, but it's also potentially a bit overwhelming for beginners, so I guess that's why I found it easier to get started with Racket.
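
Not that any single option is hard once you've picked it. A minimal sketch with the contrib JDBC wrapper, assuming the org.xerial sqlite-jdbc driver is on the classpath; the file and table names are hypothetical:

  (require '[clojure.java.jdbc :as jdbc])

  ;; A SQLite db-spec is just a map; java.jdbc picks the driver from :dbtype.
  (def db {:dbtype "sqlite" :dbname "news.db"})

  ;; Parameterized query; rows come back as Clojure maps.
  (jdbc/query db ["select * from stories where score > ?" 100])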

[0]: https://docs.racket-lang.org/web-server/stateless.html#%28pa...

[1]: https://docs.racket-lang.org/db/

[2]: https://stackoverflow.com/questions/294802/use-a-database-wi...

[3]: https://adambard.com/blog/clojure-sql-libs-compared/

reply

2 points by i4cu 30 days ago | link

> If that is something you have published, it'd be fun to see, whether it's finished or not.

I actually tried to dig it out the other day during this conversation, but it's buried somewhere unavailable right now. If I find it / get to it, I'll post.

> That multitude of libraries with similar purpose may be useful in some cases, sure, but also potentially a bit overwhelming for beginners, so I guess that's why I found it easier to get started with Racket.

Agreed. Navigating the volume of libraries and the options available is a real pain in the beginning, but once you get past that, it's not bad at all. At the same time, take a look at the quality of Clojure's Carmine Redis library vs. Racket's Redis libraries. Miles apart.

To each their own, right :)

reply

3 points by hjek 22 days ago | link

> take a look at the quality of Clojure's Redis Carmine Library vs. Racket's Redis Libraries.

Good point.

Anyways, I'm going with SQLite. It's really fast[0] and I just found out about recursive selects[1].

[0]: https://www.sqlite.org/fasterthanfs.html

[1]: https://sqlite.org/lang_with.html
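
For example, a whole comment subtree can come back in one query with a recursive select. A sketch, again via the Clojure contrib JDBC wrapper to match the other examples in this thread; the comments table and root id are made up:

  (require '[clojure.java.jdbc :as jdbc])

  (def db {:dbtype "sqlite" :dbname "news.db"})

  ;; WITH RECURSIVE follows parent links without N separate queries.
  (jdbc/query db
    ["WITH RECURSIVE thread(id, parent, text) AS (
        SELECT id, parent, text FROM comments WHERE id = ?
        UNION ALL
        SELECT c.id, c.parent, c.text
        FROM comments c JOIN thread t ON c.parent = t.id)
      SELECT * FROM thread" 42])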

reply

2 points by akkartik 38 days ago | link | parent | on: Knark - rewrite in plain Racket?

Can you elaborate on what extra features Clojure's tables provide?

reply

4 points by i4cu 38 days ago | link

Where to start...

1. The ability to read them. Especially on output. It may seem small, but when you have a large, deeply nested map it really matters.

  {:user "akkartik"}
2. Keywords are ifns that work on hash-maps.

  (map :user list-of-maps)
3. All the built-in library fns that work on them. Basics plus assoc-in, get-in, merge, deep-merge, contains?...

Here's a good function to separate a collection of maps by any fn/ifn.

  (def separate (juxt filter remove))
 
  (separate :user data)
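
And the nested-map helpers read just as directly:

  ;; Reach into and update nested maps without manual unwrapping.
  (get-in {:jane {:weapon "Greatsword"}} [:jane :weapon])
  ;; => "Greatsword"

  (assoc-in {:jane {:score 140}} [:jane :score] 150)
  ;; => {:jane {:score 150}}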

4. Iteration just works, i.e. they automatically become sequences of key/value pairs for pretty much every composing fn (reduce, loop, etc.).
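
A quick illustration with a throwaway map: each entry arrives as a [key value] pair, so nested values destructure right in the argument vector.

  (reduce (fn [total [_ {:keys [score]}]] (+ total score))
          0
          {:joe {:score 100} :jane {:score 140}})
  ;; => 240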

5. Destructuring bind. Eg:

  (defn myfn [{:keys [user] :as record}]
    (str user " rocks"))
6. Nil values are supported.

It goes on, but I think that's enough... Besides, my thumbs are getting sore typing via phone.

reply

3 points by akkartik 38 days ago | link

You'll need to elaborate more; apologies in advance to your thumbs. Some of the features you describe I don't understand (ability to read, maybe others). Others seem to already be in Arc (library support, iteration, destructuring bind; [_ 'user] doesn't seem much worse than :user). Nil values feels like the only definitely missing feature, and it feels more like a fork in the road with different trade-offs rather than one alternative being always superior to the other.

reply

7 points by i4cu 37 days ago | link

OK, so let's use this data.

Clojure:

  > (def players
       {:joe {:class "Ranger" :score 100}
        :jane {:class "Knight" :weapon "Greatsword" :score 140}
        :ryan {:class "Wizard" :weapon "Mystic Staff" :score 150}})
response:

{:joe {:class "Ranger" :score 100} :jane {:class "Knight" :weapon "Greatsword" :score 140} :ryan {:class "Wizard" :weapon "Mystic Staff" :score 150}}

Arc:

  (= players
    (obj 'joe (obj 'class "Ranger" 'score 100)
         'jane  (obj 'class "Knight" 'weapon "Greatsword" 'score 140)
         'ryan  (obj 'class "Wizard" 'weapon "Mystic Staff" 'score 150))) 
response:

#hash(((quote jane) . #hash(((quote weapon) . "Greatsword") ((quote class) . "Knight") ((quote score) . 140))) ((quote ryan) . #hash(((quote weapon) . "Mystic Staff") ((quote class) . "Wizard") ((quote score) . 150))) ((quote joe) . #hash(((quote class) . "Ranger") ((quote score) . 100))))

-- Readability --

Reading that output is beyond a headache. It should be against the law! But more importantly, when writing code/troubleshooting you can't copy the evaluated output, make a modification, and then call the function again. I know you can bind it and apply functions to modify it, but often that's a real hassle compared to copy/paste.

-- Destructuring bind --

See links [a,b]

I think simple cases don't show the difference.

  > (def book {:name "SICP" :details {:pages 657 :isbn-10 "0262011530"}})
  
  > (let [{name :name {pages :pages isbn-10 :isbn-10} :details} book]
      (println "name:" name "pages:" pages "isbn-10:" isbn-10))
	  
response:

    name: SICP pages: 657 isbn-10: 0262011530
	
-- Library functions --

Clojure has DOZENS of helper functions (and important traversal functions) that don't exist in Arc. Yeah, I could write them, but I still have to. I was highlighting the ease with 'separate', but here's even 'deep-merge'...

  (defn deep-merge
    "Recursively merges maps. If keys are not maps, the last value wins."
    [& vals]
     (if (every? map? vals)
         (apply merge-with deep-merge vals)
         (last vals)))
		 
  > (deep-merge players {:joe {:weapon "LongBow"} :jane {:weapon "Lance"}})
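
That call returns the players map with the nested weapons merged in (key order aside):

  {:joe {:class "Ranger" :score 100 :weapon "LongBow"}
   :jane {:class "Knight" :weapon "Lance" :score 140}
   :ryan {:class "Wizard" :weapon "Mystic Staff" :score 150}}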
  
It's not even these sample cases that expose the issues. It's about working with REAL data. I have many files containing hash-maps that are 15 levels deep. If I were to attempt pretty much anything but the most simple cases in Arc, I would hit problem after problem after problem. Which I did. So fine, really I should use a DB, right? OK, let me just grab that DB library in Arc...

-- Nil values --

I'd even be fine with any falsey value, like boolean false. But Arc will not even store an #f value. To suggest I can't pass in REAL data with falsey values is really limiting. I can't pass in an options table argument. For every operation I would have to add more data, or nuanced data, or nuanced operations, to support "this value was set, but it's set to 'NO'; I don't want that option".
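
In Clojure the falsey value and the missing key stay distinguishable, which is exactly what an options table needs. A small sketch (the option names are made up):

  (def opts {:compress? false})

  ;; false round-trips like any other value...
  (get opts :compress?)         ;; => false

  ;; ...so "set to NO" is distinct from "never set".
  (contains? opts :compress?)   ;; => true
  (contains? opts :verbose?)    ;; => false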

[a] https://clojure.org/guides/destructuring#_associative_destru...

[b] https://clojure.org/guides/destructuring#_keyword_arguments

reply

3 points by akkartik 37 days ago | link

Thanks! That's very helpful.

reply

4 points by akkartik 50 days ago | link | parent | on: Tell Arc: Arc 3.2

Thanks so much, Scott!

reply

More