aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
EverYoung has quit [Remote host closed the connection]
<github>
[hpx] hkaiser force-pushed fixing_3272 from bdc41a0 to e384edd: https://git.io/vxyik
<github>
hpx/fixing_3272 e384edd Hartmut Kaiser: Compiling more tests exclusively
EverYoung has joined #ste||ar
<github>
[hpx] hkaiser opened pull request #3273: Splitting tests to avoid compiler OOM (master...fixing_3272) https://git.io/vxSrv
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
diehlpk has joined #ste||ar
EverYoung has quit [Ping timeout: 265 seconds]
diehlpk has quit [Remote host closed the connection]
jaafar has quit [Remote host closed the connection]
<jbjnr_>
simbergm: I rebased the lazy_thread_init and remove_wait_or_add_new
<jbjnr_>
onto master and force pushed them, but I think I screwed up. There are two commits on one of my other branches that might have been on one of those branches. "Fix reinit_counters test" and "Fix rand usage in reinit_counters" - I do not know which branch those two commits belong to
<jbjnr_>
both are authored by you
<jbjnr_>
any idea?
<simbergm>
ah, they belong to remove_wait_or_add_new
<jbjnr_>
ah great. thanks
<jbjnr_>
I'll put them back
<simbergm>
they could go onto master as well, but they started failing on that branch
<jbjnr_>
if they belong on a different branch, then let's put them there ...
<simbergm>
was a bit lazy...
<simbergm>
yes
<simbergm>
but you will need them on the remove_wait_or_add_new branch eventually
<jbjnr_>
can I leave it to you then?
<jbjnr_>
aha
<simbergm>
yeah, I can do that
<simbergm>
is github slow for anyone else?
<simbergm>
jbjnr_: I updated the future_overhead test yesterday as well, will push that today
<simbergm>
in some cases up to 20% difference
<simbergm>
and the remove_wait_or_add_new is a lot slower...
<simbergm>
deque is slow
<simbergm>
remove_wait_or_add_new with queue is still slower than master, but a lot faster than deque
<github>
hpx/master ed3b67b Mikael Simberg: Fix another unbounded rand in the reinit_counters test
<simbergm>
aww, didn't mean to push to master...
<github>
[hpx] msimberg created msimberg-patch-1 (+1 new commit): https://git.io/vxSbn
<github>
hpx/msimberg-patch-1 2ea1c48 Mikael Simberg: Reinit counters synchronously in reinit_counters test...
<jbjnr_>
20% difference in what - I didn't follow you
<jbjnr_>
where is heller these days? moved on to work with raja or legion maybe?
<simbergm>
sorry, I added another test to that which just looks at the number of threads running and does not call wait_all at all
<simbergm>
doing that is sometimes 20% faster than doing wait_all
<simbergm>
so that's more or less the time to actually run the tasks, and then there's a bit of overhead for dealing with futures
<simbergm>
and "sometimes" means it depends on the number of threads
<jbjnr_>
aha. you are talking about the futures overhead test yes?
<simbergm>
yes
<jbjnr_>
did you use my version?
<simbergm>
no, sorry, I rewrote it without semaphores and without wait_all
<jbjnr_>
ok
<simbergm>
but the important part was that with no staged tasks it's a lot slower, probably higher contention on the queues?
<jbjnr_>
that is very bad
<jbjnr_>
it means we have to rewrite it again
<jbjnr_>
but ...
<jbjnr_>
those future_overhead tests are not really realistic in the real world
<simbergm>
yes, if you see a speedup for cholesky it might not matter
<jbjnr_>
they hammer the queues constantly, but in practice, real tasks won't behave like that
<simbergm>
yep, it's good to keep in mind though because it will be a problem again if one wants to use smaller tasks
<github>
[hpx] msimberg opened pull request #3274: Reinit counters synchronously in reinit_counters test (master...msimberg-patch-1) https://git.io/vxSb7
<jbjnr_>
^^that branch is independent of the wait or add new stuff etc is it?
<simbergm>
yeah, now it is
<jbjnr_>
does your futures test give similar results to mine?
david_pfander has joined #ste||ar
<simbergm>
didn't try yet, will check
<simbergm>
did you see a big difference?
<jbjnr_>
yes
<jbjnr_>
but my test is not 100% reliable
<jbjnr_>
it can end before all tasks have completed. would need to add an atomic check to be 100% certain
<jbjnr_>
I need to double check it
<simbergm>
hrm, do you have the gist to your version around still?
<github>
[hpx] biddisco pushed 2 new commits to remove_wait_or_add_new: https://git.io/vxSNV
<github>
hpx/remove_wait_or_add_new 1761016 Mikael Simberg: Fix rand usage in reinit_counters
<github>
hpx/remove_wait_or_add_new e172a92 Mikael Simberg: Fix reinit_counters test...
<jbjnr_>
oops.
<jbjnr_>
pushed the wrong branch
<jbjnr_>
nevermind
<jbjnr_>
simbergm: try this ^^^^
<jbjnr_>
wait ...
<github>
[hpx] biddisco created futures_overhead (+1 new commit): https://git.io/vxSNX
<github>
hpx/futures_overhead fb2f8b3 John Biddiscombe: improve future overhead test to not wait on many futures
<jbjnr_>
there she blows
<jbjnr_>
try it and let me know what you find
<jbjnr_>
might have some cruft in there because I was playing around to use different schedulers etc
<simbergm>
jbjnr_: weee, thanks
<heller>
simbergm: jbjnr_: regarding the thread stuff, I'll return to office next week
<heller>
We should plan for a call or so to coordinate the efforts and synchronize our ideas
<simbergm>
heller: yay, good to have you back
<jbjnr_>
heller: you on vacation or just work travel?
<simbergm>
jbjnr_: future_overhead test with 18 threads and 1000000 tasks
<simbergm>
original version: 0.83-1.05 s (big variance)
<simbergm>
semaphore: 0.88-0.90s
<simbergm>
thread counts: 0.75-0.84
<simbergm>
20 repetitions each
<simbergm>
on master
mcopik has joined #ste||ar
<jbjnr_>
simbergm: looks good. thread counts must be your version. Feel free to push it and I'll play with it later.
<jbjnr_>
Popping out for a few minutes, back in a bit
<jbjnr_>
I hate being ill.
<heller>
jbjnr_: vacation
<simbergm>
jbjnr_: last data point, no wait_each(scratcher, ...) but just wait_all
<simbergm>
0.75-0.95 s, again huge variance
<simbergm>
heller: somewhere nice? :)
<heller>
At home ;)
<heller>
So yes
<simbergm>
sounds good :D
<simbergm>
hawaii sounds horrible anyway (right jbjnr_)
<heller>
hkaiser: next step will be a better reuse of the stacks
<heller>
And eliminating some more allocations
<hkaiser>
heller: let's do this first, I'm not convinced removing the staged threads is the solution
nikunj has joined #ste||ar
Anushi1998 has quit [Ping timeout: 245 seconds]
<heller>
hkaiser: it's part of the solution I think
<hkaiser>
no objections, I'd like to see proof however ;)
aserio has quit [Ping timeout: 245 seconds]
aserio has joined #ste||ar
victor_ludorum has quit [Ping timeout: 260 seconds]
hkaiser_ has joined #ste||ar
hkaiser has quit [Ping timeout: 276 seconds]
<github>
[hpx] msimberg created msimberg-patch-2 (+1 new commit): https://git.io/vx9DV
<github>
hpx/msimberg-patch-2 b5e10b2 Mikael Simberg: Make CircleCI install step only depend on examples
<github>
[hpx] msimberg opened pull request #3275: WIP: Make CircleCI install step only depend on examples (master...msimberg-patch-2) https://git.io/vx9Db
quaz0r has quit [Quit: WeeChat 2.2-dev]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
mbremer has quit [Quit: Page closed]
david_pfander has quit [Ping timeout: 264 seconds]
aserio has quit [Quit: aserio]
katywilliams has joined #ste||ar
katywilliams has quit [Client Quit]
eschnett has quit [Quit: eschnett]
hkaiser_ has quit [Quit: bye]
hkaiser has joined #ste||ar
<jbjnr_>
hkaiser: yt?
<hkaiser>
jbjnr_: hey
<jbjnr_>
I'd like to start writing about variadic executors ...
<jbjnr_>
but ...
<hkaiser>
but what?
<jbjnr_>
I need to bring up the subject of thread pools and schedulers, as they are relevant to what I'm doing
<jbjnr_>
have any proposals been put forward to discuss the interaction between schedulers and executors?
<jbjnr_>
anything about schedulers at all in fact
<jbjnr_>
or thread pools even
<hkaiser>
execution contexts are currently being discussed, so no
<hkaiser>
well, the executors proposal has some thread-pool in it
<jbjnr_>
do you know which document to read for current thinking on execution contexts?
<hkaiser>
jbjnr_: let me get back to you on this, need to look around
<K-ballo>
that's the one I had in mind, the former
<jbjnr_>
thanks. I've seen 0443 and 0761, but not 0737. I will read up on them again
<hkaiser>
jbjnr_: have you seen the link for p1017?
diehlpk_work_ has quit [Quit: Leaving]
<jbjnr_>
hkaiser: yes 1017 - I wanted to start writing. I presented my stuff at SOS the other day and concluded my talk by saying that we needed variadic executors and would submit a paper to ISO, so the timing is right.
<hkaiser>
jbjnr_: perfect
mcopik has joined #ste||ar
diehlpk has joined #ste||ar
mcopik has quit [Quit: Leaving]
diehlpk has quit [Ping timeout: 264 seconds]
diehlpk has joined #ste||ar
diehlpk has quit [Ping timeout: 264 seconds]
EverYoung has quit [Remote host closed the connection]