Arc Forum
Threads
3 points by highCs 3064 days ago | 18 comments
Hi,

From racket doc:

Threads run concurrently in the sense that one thread can preempt another without its cooperation, but threads do not run in parallel in the sense of using multiple hardware processors. See Parallelism for information on parallelism in Racket.

So the arc http server runs in one OS thread? If not, any chance one can specify the number of OS threads in the pool? Other considerations?

Thanks



2 points by akkartik 3064 days ago | link

My understanding is that Racket, like most other high-level languages, has traditionally[1] provided support for concurrency but not parallelism. You still benefit from atomic, though, because otherwise a thread can be stopped at any time, and some other thread restarted in its place. You can have race conditions on a single OS thread.

[1] There's new support for parallelism: http://docs.racket-lang.org/reference/futures.html. Arc hasn't used it yet, though.
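
For reference, here's a minimal Racket sketch of the future/touch API (not from this thread, and the numbers are made up); the loop sticks to fixnum arithmetic since simple arithmetic is among the operations futures can run without blocking:

  (require racket/future)

  ; A fixnum-only busy loop, just to have work futures can run in
  ; parallel without hitting a blocking primitive.
  (define (count-up n)
    (let loop ([i 0])
      (if (= i n) i (loop (+ i 1)))))

  ; Each future may run on its own OS-level thread; touch waits for and
  ; returns each result.
  (define f1 (future (lambda () (count-up 10000000))))
  (define f2 (future (lambda () (count-up 10000000))))
  (list (touch f1) (touch f2))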

-----

1 point by highCs 3064 days ago | link

I'm under the impression that when I create a future it doesn't run until I call touch - that behavior would defeat parallelism if I'm correct. It's not supposed to do that, is it? Any example that works?

  (= f ($.future (fn () (for i 0 (< i 100) (++ i) (prn i)))))
  ($.touch f)
  0
  1
  2
  3
  4
  5
  6
  ...

-----

3 points by rocketnia 3060 days ago | link

Going by http://docs.racket-lang.org/guide/parallelism.html#%28part._..., it looks like any operation in a future that might be expensive is a "blocking operation," which suspends the future until it's touched. Even multiplying a floating-point number by a fixed-point integer is expensive enough to be blocking!

Without testing it myself, I'd guess there are a few things that might be blocking in your example:

* Converting a number to a string.

* Looking up the current value of stdout. This depends on the current parameterization, which is probably carried on the continuation in the form of continuation marks. According to http://docs.racket-lang.org/reference/futures.html, "work in a future is suspended if it depends in some way on the current continuation, such as raising an exception."

* Actually writing to the output stream.

Maybe "visualize-futures" would show you what's going on in particular.

-----

3 points by rocketnia 3058 days ago | link

I finally sat down to test it, and it looks like all three of those are blocking operations, just as I thought.

  arc> (= g 1)
  1
  arc> ($.future:fn () (= g 2))
  #<future>
  arc> g
  2
As a baseline, that future seems to work. It simply assigns a variable, which the documentation explicitly says is a supported operation, so there wasn't a lot that could go wrong.

  arc> (= f ($.future:fn () (= g $.number->string.3)))
  #<future>
  arc> g
  2
  arc> $.touch.f
  "3"
  arc> g
  "3"
That future had to call Racket's number->string, and it blocked until it was touched. The same thing happens with Arc's (= g string.3).

  arc> (= f ($.future:fn () (= g ($.current-output-port))))
  #<future>
  arc> g
  "3"
  arc> $.touch.f
  #<output-port:stdout>
  arc> g
  #<output-port:stdout>
That future blocked due to calling Racket's current-output-port. The same thing happens with Arc's (= g (stdout)).

  arc> (= sout (stdout))
  #<output-port:stdout>
  arc> (= f ($.future:fn () ($.display #\! sout) (= g 5)))
  #<future>
  arc> g
  #<output-port:stdout>
  arc> $.touch.f
  !5
  arc> g
  5
That future blocked on calling Racket's display operation. It finally output the #\! character when it was touched. The same thing happens with Arc's (disp #\! out), and the same thing happens if I pass in a string instead of a single character.

I tried using visualize-futures from Arc, but I ran across some errors. Here's the first one:

  arc> ($:require future-visualizer)
  #<void>
  arc>
    (def visualize-futures-fn (body)
      (($:lambda (body) (visualize-futures (body))) body))
  #<procedure: visualize-futures-fn>
  arc>
    (mac visualize-futures body
      `(visualize-futures-fn:fn () ,@body))
  #(tagged mac #<procedure: visualize-futures>)
  arc> (def wrn (x) write.x (prn))
  #<procedure: wrn>
  arc>
    (visualize-futures:withs
        (g 5
         f ($.future:fn () (= g $.number->string.6)))
      wrn.g
      (wrn $.number->string.6)
      wrn.g
      (wrn $.touch.f)
      wrn.g)
  5
  "6"
  5
  "6"
  "6"
  inexact->exact: no exact representation
    number: +nan.0
    context...:
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-drawing.rkt:344:4: for-loop
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-drawing.rkt:387:0: calc-segments
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-gui.rkt:106:0: show-visualizer3
     C:\mine\prog\repo\anarki\ac.scm:1234:4
It seems to be dividing by zero there. I tried it in Racket, but I got the same error. This error can be fixed by tacking on (sleep 0.1) so that the total duration isn't close to zero:

    (visualize-futures:withs
        (g 5
         f ($.future:fn () (= g $.number->string.6)))
      (sleep 0.1)
      wrn.g
      (wrn $.number->string.6)
      wrn.g
      (wrn $.touch.f)
      wrn.g)
However, even that code gives me trouble in Anarki; the window that Racket pops up is unresponsive for some reason. So here's the same test in Racket, where the window actually works:

  Welcome to Racket v6.1.1.
  > (require future-visualizer)
  > (define (wrn x) (write x) (display "\n"))
  >
    (visualize-futures
      (let* ([g 5]
             [f (future (lambda () (set! g (number->string 6))))])
        (sleep 0.1)
        (wrn g)
        (wrn (number->string 6))
        (wrn g)
        (wrn (touch f))
        (wrn g)))
  5
  "6"
  5
  #<void>
  "6"
  >
In the pop-up, the panel at the left shows a summary of expensive operations:

  Blocks (1)
    number->string (1)
  Syncs (0)
  GC's (0 total, 0.0 ms)
If I look in the timeline and select the two red dots, this information comes up:

  Event: block
  Time: +0.0 ms
  Future ID: 1
  Process ID: 1
  Primitive: number->string

  Event: block
  Time: +109.744140625 ms
  Future ID: 1
  Process ID: 0
  Primitive: number->string
It looks like the first one is the number->string call inside the future, and the second one is the call that occurs outside the future. I guess it's still considered a blocking operation even if it happens in the main process, but fortunately it doesn't stop the whole program. :)

So number->string is a primitive that's considered complicated enough to put the future in a blocked state. To speculate, maybe the Racket project doesn't want to incur the performance cost of having the future's process load the code for every single Racket primitive, or maybe they just haven't implemented this one operation yet.

Going by this, futures can be useful, but they have a pretty limited set of operations. Still, mutation is pretty powerful: If needed, maybe it's possible to set up an execution harness where the future assigns the operation it wants to perform to a variable, and then some monitoring thread takes care of it, assigning the result back to a variable that the future can read.
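
To make that concrete, here's a hypothetical sketch of such a harness in Racket (untested here, and it waits on an fsemaphore rather than polling a variable, since fsemaphores are how futures can wait without otherwise blocking):

  (require racket/future)

  (define request #f)               ; value the future wants converted
  (define result #f)                ; answer written back by the monitor
  (define ready (make-fsemaphore 0))

  ; The future sticks to operations it can run in parallel: assignment,
  ; a variable read, and waiting on an fsemaphore.
  (define worker
    (future (lambda ()
              (set! request 6)
              (fsemaphore-wait ready)
              result)))

  ; A monitoring thread performs the blocking-prone primitive on the
  ; future's behalf, then signals it.
  (thread (lambda ()
            (let loop ()
              (if request
                  (begin (set! result (number->string request))
                         (fsemaphore-post ready))
                  (begin (sleep 0.01) (loop))))))

  (touch worker)  ; expect "6"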

Meanwhile, I wonder why the pop-up doesn't seem to work from Anarki. I seem to remember other Racket GUI operations haven't worked for me either. If the GUI works for other people, the problem might just be that I'm on Windows.

-----

3 points by highCs 3058 days ago | link

Oh I get it I think. Futures are for computing arithmetic in parallel.

-----

1 point by highCs 3064 days ago | link

Thinking about it, spawning a bunch of worker arc processes should not be very difficult.
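
Something like this might be enough (a hypothetical Racket sketch; "worker.rkt" is just a stand-in for whatever script boots a worker process):

  ; Spawn a few worker processes and wait for them to exit.
  (define (spawn-worker id)
    (define-values (proc from-worker to-worker from-err)
      (subprocess #f #f #f
                  (find-executable-path "racket")
                  "worker.rkt" (number->string id)))
    proc)

  (define workers (for/list ([i (in-range 4)]) (spawn-worker i)))
  (for-each subprocess-wait workers)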

-----

2 points by highCs 3063 days ago | link

I tried using worker processes; the single arc http server is still a major contention point. As I understand it now, the solution for good http server performance would be to sit a bunch of arc servers behind a reverse proxy like nginx. That explains why the arc http server isn't more complicated: it doesn't need to be.

-----

2 points by akkartik 3063 days ago | link

I'd be curious to see your experiment. How did you measure contention?

I'm not aware of any arc sites in the wild using multiple server processes. This isn't for performance reasons but just correctness; we don't have a database that can keep concurrent writes from stepping on each other.

-----

2 points by highCs 3063 days ago | link

> How did you measure contention?

Very empirically. Basically, on localhost, I have a client sending a shitload of basic requests (one thread for each request) and a server printing "hello world" in the repl for each of them in a minimalistic defop. The server ends up printing on the repl long after the client has finished sending requests. The absolute number of requests we're talking about here is on the order of 200; the server takes 5 seconds (give or take) to print the last "hello world" - on a recent 2.5GHz CPU. I have to make more tests to see if the client may be responsible for a share of that latency. I don't see the OS and repl as bottlenecks (I've printed to the repl at tremendous rates in the past). I may be completely wrong, as I'm a beginner in networking.

> I'm not aware of any arc servers in the wild using multiple servers. This isn't for performance reasons but just correctness; we don't have a database that can keep concurrent writes from stepping on each other.

I've replaced the diskvar files with sqlite entries. Sqlite is advertised on its website as a competitor to fopen, which makes it a perfect choice for diskvars in my opinion (again, I'm ultimately a beginner on that matter).
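
For what it's worth, a sqlite-backed diskvar might look roughly like this through Racket's db library (a hypothetical sketch; the table and helper names are invented):

  (require db)

  ; One key/value table standing in for the diskvar files.
  (define conn (sqlite3-connect #:database "diskvars.db" #:mode 'create))
  (query-exec conn
    "create table if not exists diskvars (k text primary key, v text)")

  (define (diskvar-write k v)
    (query-exec conn "insert or replace into diskvars (k, v) values (?, ?)" k v))

  (define (diskvar-read k)
    (query-maybe-value conn "select v from diskvars where k = ?" k))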

-----

2 points by akkartik 3062 days ago | link

Your result does seem off. Using apachebench, I'm able to send 200 requests to a minimal arc server in 0.75s.

Here's my commandline:

  $ ab -n 200 http://localhost:8080/

-----

2 points by highCs 3061 days ago | link

Ran the same test: 200 requests in 2 seconds (my test machine is a laptop). The weird thing is that the percentiles are multiples of 16...

  Benchmarking localhost (be patient)
  Completed 100 requests
  Completed 200 requests
  Finished 200 requests

  Server Software:
  Server Hostname:        localhost
  Server Port:            80

  Document Path:          /
  Document Length:        11 bytes

  Concurrency Level:      1
  Time taken for tests:   2.456 seconds
  Complete requests:      200
  Failed requests:        0
  Total transferred:      17800 bytes
  HTML transferred:       2200 bytes
  Requests per second:    81.44 [#/sec] (mean)
  Time per request:       12.279 [ms] (mean)
  Time per request:       12.279 [ms] (mean, across all concurrent requests)
  Transfer rate:          7.08 [Kbytes/sec] received

  Connection Times (ms)
              min  mean[+/-sd] median   max
  Connect:        0    0   1.9      0      16
  Processing:     0   12   9.1     16      31
  Waiting:        0    3   6.3      0      16
  Total:          0   12   9.3     16      31

  Percentage of the requests served within a certain time (ms)
  50%     16
  66%     16
  75%     16
  80%     16
  90%     16
  95%     31
  98%     31
  99%     31
 100%     31 (longest request)

-----

2 points by highCs 3062 days ago | link

Alright, it's off by almost 10x, so my test sucks. The client code must be wrong. Thinking about it quickly, it's probably not so easy to send a lot of requests in parallel... I'll use ab in the future. Thanks a lot for checking that.

-----

2 points by akkartik 3063 days ago | link

That sounds amazing! I'd never heard that about sqlite. Patches would be most welcome. If you tell me your github username I'll give you commit rights to anarki.

-----

2 points by highCs 3062 days ago | link

> That sounds amazing! I'd never heard that about sqlite.

Cool, I'm happy that sounds like interesting stuff.

> If you tell me your github username

Please find my github profile on my arc forum profile.

> I'll give you commit rights to anarki

I would be happy to contribute. Well, I have to, it's a duty; I've been given such an amazing language in the first place.

So I have these sqlite objects. I've coded worker processes which you can spawn and kill (they use the db to register, take jobs, and kill themselves). You can give them any job by supplying a list, and they return the result (using the db again).

I'm planning to write a cluster.arc, which manages a bunch of http servers you can spawn and kill the same way as the worker processes (well, you could do that using the worker processes themselves; I don't remember why I didn't retain that idea, though, I'll remember...). Easy to use behind a reverse proxy, that's the goal.

I have a with-lock macro, which takes an id as argument; it's like atomic but associated with an id (it uses the db again, so that it works across processes) (there is something similar in Racket using files).
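
As a sketch of the file-based variant, Racket's call-with-file-lock/timeout could serve as the cross-process lock (hypothetical code; the lock-file naming scheme is invented):

  (require racket/file)

  ; Serialize a critical section across processes by locking a file
  ; derived from the id; every process must agree on the same path.
  (define (call-with-id-lock id thunk)
    (call-with-file-lock/timeout
     (build-path (find-system-path 'temp-dir) (format "arc-lock-~a" id))
     'exclusive
     thunk
     (lambda () (error 'with-lock "could not acquire lock ~a" id))))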

I have let1, alet1, when1 and aor. Not sure those macros are all relevant.

I'll be happy to contribute the relevant parts. Give me a few weeks though; I still have to test most of these things and I'll need time to extract them from the pet project.

-----

2 points by akkartik 3062 days ago | link

Absolutely; take your time.

-----

2 points by highCs 3058 days ago | link

Since sqlite is daemon-less, it doesn't work well when multiple processes try to access the same database; one starts receiving "database is locked" errors. I'll see if I can find a daemon-less database engine which handles that. I want to run multiple arc http servers accessing a single database behind a load-balancer, and I want a client app to have parallelism using worker processes, which would use the same kind of database engine. So I'm looking for a daemon-less database engine that multiple processes can access concurrently, without me having to implement anything to support that, and which works under Windows, Linux and OS X. Does anyone know of one? I'm looking at Sophia right now [1]

[1] http://sphia.org/
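
In case it helps, sqlite itself can often be coaxed into multi-process use by retrying on busy and switching to WAL journaling; here's a hypothetical sketch with Racket's db library (whether that's enough for this workload is another question):

  (require db)

  ; Retry instead of failing immediately when another process holds the
  ; write lock, and use write-ahead logging so readers don't block writers.
  (define conn
    (sqlite3-connect #:database "diskvars.db"
                     #:mode 'create
                     #:busy-retry-limit 20
                     #:busy-retry-delay 0.1))
  (query-value conn "pragma journal_mode=wal")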

-----

2 points by highCs 3055 days ago | link

Sophia doesn't allow multiple processes to access the same database.

-----

2 points by highCs 3055 days ago | link

What I could do, however, is have a master process start an http server with a daemon-less database. At that point, one can read and write the database at any time via http requests. Then I can make the worker-processes thing work. Then Arc has parallelism.
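
A minimal sketch of that master process with Racket's web server (hypothetical; the url scheme is invented and an in-memory hash stands in for the daemon-less database):

  (require racket/match
           net/url
           web-server/servlet
           web-server/servlet-env)

  (define store (make-hash))

  ; /set/<key>/<value> stores a value, /get/<key> reads it back.
  (define (handle req)
    (match (map path/param-path (url-path (request-uri req)))
      [(list "set" k v) (hash-set! store k v) (response/xexpr "ok")]
      [(list "get" k)   (response/xexpr (hash-ref store k "nil"))]
      [_                (response/xexpr "unknown")]))

  (serve/servlet handle #:servlet-regexp #rx"" #:port 8090 #:command-line? #t)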

-----