Arc Forum
Atdefs - mutual atomicity
3 points by lg 5211 days ago | 11 comments
I rarely-but-sometimes want functions that are mutually atomic - at most one of them can be running, across all threads, at any given time. Don't read that too carefully, you know what I mean :)

Typically I'd use a mutex and each of the functions would lock on it before doing stuff. But arc doesn't have mutexes, it has 'atomic and 'atomic-invoke, and that's sort of inconvenient. Consider queues:

  (def enq (obj q)
    (atomic ...))

  (def deq (q)
    (atomic ...))
If I'm reading this right, only one thread can be enqueueing (anything) at a time, and only one thread can be dequeueing (anything) at a time, yet one thread could be enqueueing while another dequeues the same queue, and they'd step all over each other? That can't be right. (Note that pg mostly uses enq-limit to mess with queues, which sidesteps this by wrapping everything in 'atomic. In the one case where he doesn't, on req-times* in the server, only one thread ever touches the queue.)
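For comparison, the mutex approach mentioned above might look like this in Python (a hedged sketch; `LockedQueue`, `enq`, and `deq` are illustrative names mirroring the Arc functions, not anything from Arc itself):

```python
# Illustrative sketch: one lock per queue, shared by both operations,
# so enq and deq on the same queue can never interleave.
import threading

class LockedQueue:
    def __init__(self):
        self.items = []
        self.lock = threading.Lock()  # one mutex guards both enq and deq

    def enq(self, obj):
        with self.lock:
            self.items.append(obj)

    def deq(self):
        with self.lock:
            return self.items.pop(0) if self.items else None
```

Two `LockedQueue` instances have independent locks, so operations on different queues still run in parallel; only operations on the same queue serialize.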

So, playing off the 'defs macro, I wrote 'atdefs, which translates this:

  (let glob* 0
     (atdefs foo (bar) (++ glob* bar)
             baz (x)   (-- glob* x)))
into this (well, with gensyms instead of helpful names):

  (let glob* 0
     (withs (f1 (fn (bar) (++ glob* bar))
             f2 (fn (x)   (-- glob* x))
             gh (fn (call . args)
                    (atomic
                       (case call
                             foo (apply f1 args)
                             baz (apply f2 args)))))
        (def foo args
             (apply gh 'foo args))
        (def baz args
             (apply gh 'baz args))))
I think that's the right way to do it. I would've put f1/f2 right into the definition of gh, but would they be compiled procedures then? I'm not really sure.
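The same "route every call through one atomic dispatcher" pattern can be sketched in Python with a shared lock and a decorator (hypothetical names; this mimics what the expansion above does, it is not how Arc implements it):

```python
# Illustrative sketch of the atdefs pattern: every function decorated
# by the same atomic_group() instance shares one lock, so at most one
# of them can be running at a time.
import threading

def atomic_group():
    lock = threading.Lock()
    def wrap(f):
        def wrapper(*args, **kwargs):
            with lock:  # the shared "gh" dispatch point
                return f(*args, **kwargs)
        wrapper.__name__ = f.__name__
        return wrapper
    return wrap

atdef = atomic_group()
glob = 0

@atdef
def foo(bar):
    global glob
    glob += bar

@atdef
def baz(x):
    global glob
    glob -= x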

Here's the macro, feel free to improve it. I admit that I preserved the order of the args to 'case out of pure superstition.

  (mac atdefs args
    (let (gh ga)
      (apply (rfn helper (acc al . rest)
               (if rest
                   (let (name args body . fns) rest
                     (w/uniq g
                       (apply helper
                              `((,g (fn ,args ,body)) ,@acc)
                              `((,g ,name) ,@al)
                              fns)))
                   (w/uniq (gh gc ga)
                     `(,gh ,(apply + (rev:cons
                                       `(,gh (fn (,gc . ,ga)
                                               (atomic
                                                 (case ,gc
                                                   ,@(apply + (map (fn ((a b))
                                                                     `(,b (apply ,a ,ga)))
                                                                   (rev al)))))))
                                       acc))))))
             nil nil args)
      (w/uniq gl
        `(withs ,ga
           ,@(map (fn ((name args body))
                    `(def ,name ,gl
                       (apply ,gh ',name ,gl)))
                  (tuples args 3))))))


3 points by akkartik 5210 days ago | link

"If I'm reading this right, only one thread can be enqueueing (anything) at a time, and only one thread can be dequeueing (anything) at a time, yet two threads can be enqueueing and dequeueing the same queue at the same time, and step all over each other? That can't be right."

I wondered at this as well a few months ago, but looking at ac.scm, atomic-invoke basically ensures that there's only ever one thread inside any atomic block. This is the equivalent of Python's GIL, perhaps.

That's why enq and deq are correct.
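In other words, atomic blocks all contend on one global lock. A minimal Python sketch of that behavior (hypothetical names; `atomic_invoke` here is just an analogy to the function in ac.scm, not its actual code):

```python
# Illustrative sketch: one global lock shared by ALL atomic blocks.
# At most one thread is inside *any* atomic body at a time, which is
# why enq and deq cannot interleave even though they're separate defs.
import threading

_atomic_lock = threading.RLock()  # reentrant, so nested atomic calls don't deadlock

def atomic_invoke(f, *args):
    with _atomic_lock:
        return f(*args)
```

An RLock is used here because a function running "atomically" may itself call another atomic function from the same thread.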

-----

1 point by lg 5210 days ago | link

Ha, thanks, good to know. In that case, I'll save this for the day when arc gets a new threading model!

-----

1 point by conanite 5210 days ago | link

They're correct, but inefficient - it would be preferable to have a lock per queue.

-----

1 point by akkartik 5210 days ago | link

Fo' shizzle. The timing of this item is great, because I've been struggling with performance in my arc-based webapp. I have one thread bringing new items into the index, and it's constantly thrashing with UI threads causing unacceptable latencies.

-----

1 point by aw 5210 days ago | link

MzScheme only uses one CPU, so using atomic doesn't affect performance.

Unless you're wrapping an I/O operation in atomic -- don't do that :-)

-----

1 point by akkartik 5210 days ago | link

I don't use atomic myself. But I find that enabling the back-end thread slows down the UI threads. I've been struggling to create a simple program to illustrate this.

In fact, wrapping parts of my UI threads in atomic actually makes them more responsive[1]. I think it's because instead of getting 200 thread switches with their associated overheads, a single thread runs to 'completion'.

[1] You're right that throughput may go down, though I try to eliminate IO in atomic blocks. But latency is more of a concern to me than not wasting CPU. It's not the end of the world if new data takes a few seconds longer to make it to the index, but a few seconds in the front-end can make or break the user experience.

-----

1 point by aw 5210 days ago | link

Wow, that's interesting!

"vector-set-performance-stats!" in http://docs.plt-scheme.org/reference/runtime.html returns the number of thread context switches that have been performed, so you might try calling that at the beginning and at the end of an UI operation so you can see how many thread switches happened in between.

-----

1 point by akkartik 5209 days ago | link

Thanks for that tip! With its help I was able to figure out that my background thread was just doing a lot more work than I thought.

-----

1 point by zck 5210 days ago | link

I haven't played around with atomicity, but wouldn't it be cool if this worked?

  (atomic (def enq (obj q) ...)
          (def deq (q) ...))
Edit: if I'm testing properly, this doesn't work:

  arc> (atomic (def wait () (sleep 5) (prn "done with wait")) (def stuff () (prn "stuff!")))
  #<procedure: stuff>
  arc> (thread (wait))
  #<thread>
  arc> (stuff)
  stuff!
  "stuff!"
  arc> done with wait

-----

1 point by lg 5210 days ago | link

I thought about this too, but it doesn't look right to me. I would expect this to make the act of defining the functions atomic, but not calls to those functions once they're defined.
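The distinction shows up the same way in Python (an illustrative sketch; the names echo zck's test above):

```python
# Holding a lock while *defining* functions only serializes the
# definitions themselves. Once the block exits, the lock is released
# and calls to the functions run with no mutual exclusion at all.
import threading

lock = threading.Lock()

with lock:
    def wait():
        return "done with wait"
    def stuff():
        return "stuff!"

# The lock is already free here, before either function is ever called.
```

That matches zck's transcript: stuff ran to completion while wait was still sleeping, because the atomic wrapper had finished long before either call.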

-----

1 point by zck 5210 days ago | link

To me, it feels the same as

  (let ctr 0
       (def inc () (++ ctr))
       (def reset () (= ctr 0)))

-----