hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
FunMiles has quit [Remote host closed the connection]
FunMiles has joined #ste||ar
FunMiles has quit [Ping timeout: 260 seconds]
<hkaiser>
gonidelis[m]: yt?
<gonidelis[m]>
yes
<hkaiser>
the counting test we copied does not work for your executor
<hkaiser>
the other test has the counter increment in the async_execute of the test executor
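As a rough sketch of the pattern described above (not the actual test code; counting_executor and count are illustrative names, and the executor-trait registration HPX would need to recognize the type is omitted), the idea is that every task funneled through async_execute bumps a counter before the work is forwarded:

    #include <hpx/future.hpp>

    #include <atomic>
    #include <cstddef>
    #include <utility>

    std::atomic<std::size_t> count{0};

    struct counting_executor
    {
        // every task scheduled through this executor passes through async_execute,
        // so incrementing here records how many tasks were actually spawned
        template <typename F, typename... Ts>
        auto async_execute(F&& f, Ts&&... ts) const
        {
            ++count;
            return hpx::async(std::forward<F>(f), std::forward<Ts>(ts)...);
        }
    };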
<gonidelis[m]>
you mean it's not appropriate?
<gonidelis[m]>
ahh ok
<gonidelis[m]>
so i need to port that increment to my test executor too?
<ms[m]>
gonidelis: what do you understand by "sustains"?
<gonidelis[m]>
does not destruct the thread
<gonidelis[m]>
hm...
<gonidelis[m]>
maybe aggregated keeps the given thread for as long as possible, instead of keeping a pool of threads that go idle and active again and again
<gonidelis[m]>
right?
<gonidelis[m]>
but once it kills it, it never comes back (the thread)
<ms[m]>
something like that, yeah
<gonidelis[m]>
thanks
<gonidelis[m]>
do we have other executors that sustain the pool of threads?
<ms[m]>
nope
<gonidelis[m]>
ok. i will go with fork join then
<gonidelis[m]>
thanks
<ms[m]>
what are you testing?
<gonidelis[m]>
i am working on taskbench
<gonidelis[m]>
and i spawn tasks with an hpx::for_loop
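A minimal sketch of that shape, assuming the usual hpx_main/hpx::init entry-point boilerplate and a placeholder run_task instead of taskbench's real kernels; each timestep is one hpx::for_loop, which joins before returning and therefore acts as the per-timestep barrier:

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <hpx/init.hpp>

    #include <cstddef>

    // placeholder standing in for the benchmark's per-task work
    void run_task(std::size_t t, std::size_t i) { (void) t; (void) i; }

    int hpx_main(int, char*[])
    {
        std::size_t const num_timesteps = 100;
        std::size_t const num_tasks = 64;

        for (std::size_t t = 0; t != num_timesteps; ++t)
        {
            // the for_loop does not return until all tasks of this timestep are done
            hpx::for_loop(hpx::execution::par, std::size_t(0), num_tasks,
                [&](std::size_t i) { run_task(t, i); });
        }
        return hpx::finalize();
    }

    int main(int argc, char* argv[])
    {
        return hpx::init(argc, argv);
    }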
<ms[m]>
nice, do you already have some results (not right now with the fork_join_executor, but maybe on non-for-loopy tests)?
<gonidelis[m]>
yes
<ms[m]>
and, how does it look?
<gonidelis[m]>
promising
<gonidelis[m]>
we compare hpx directly against openmp
<gonidelis[m]>
what i see so far is that hpx behaves better with big workloads
<gonidelis[m]>
while in large-granularity situations i need to handle the dependencies more carefully in order to see better performance (working on that rn)
<gonidelis[m]>
my implementation is still very primitive. just imagine that i asynchronously execute a group of tasks on each timestep in a blocking way
<gonidelis[m]>
the goal is to remove the barrier between the timesteps and be able to assign futures on tasks of succeeding timesteps
<gonidelis[m]>
for reference ^^: each row is one timestep and each column represents one task throughout time
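A sketch of what removing that barrier could look like, with the dependency pattern simplified so that each column depends only on itself and run_task again a placeholder: keep one future per column and chain timestep t's task onto timestep t-1's future instead of waiting for the whole row.

    #include <hpx/future.hpp>

    #include <cstddef>
    #include <utility>
    #include <vector>

    // placeholder for the benchmark's per-task work
    void run_task(std::size_t t, std::size_t i);

    void run_futurized(std::size_t num_timesteps, std::size_t num_tasks)
    {
        // one future per column, initially ready
        std::vector<hpx::future<void>> prev;
        for (std::size_t i = 0; i != num_tasks; ++i)
            prev.push_back(hpx::make_ready_future());

        for (std::size_t t = 0; t != num_timesteps; ++t)
        {
            std::vector<hpx::future<void>> next;
            for (std::size_t i = 0; i != num_tasks; ++i)
            {
                // no global barrier: each continuation runs as soon as its own predecessor finishes
                next.push_back(prev[i].then(
                    [t, i](hpx::future<void>) { run_task(t, i); }));
            }
            prev = std::move(next);
        }
        hpx::wait_all(prev);
    }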
<ms[m]>
sounds good
<ms[m]>
just bear in mind with the fork_join_executor that it doesn't play well with other, unrelated work, since it (almost) blocks all worker threads until it goes out of scope
<gonidelis[m]>
unrelated work? executing and assigning tasks is my only work here
<ms[m]>
another one you can now try on master is scheduler_executor{thread_pool_scheduler{}}, which is a bit more friendly with other tasks, but still spawns far fewer hpx threads
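For reference, a sketch of plugging the two executors mentioned here into the same for_loop; the hpx::execution::experimental namespace and the exact scheduler_executor{thread_pool_scheduler{}} spelling are taken from the discussion above and may differ between HPX versions, and run_task is a placeholder:

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>

    #include <cstddef>

    namespace ex = hpx::execution::experimental;

    // placeholder for the benchmark's per-task work
    void run_task(std::size_t i);

    void run_with_executors(std::size_t num_tasks)
    {
        {
            // occupies (almost) all worker threads for its whole lifetime
            ex::fork_join_executor fj;
            hpx::for_loop(hpx::execution::par.on(fj), std::size_t(0), num_tasks,
                [&](std::size_t i) { run_task(i); });
        }

        {
            // friendlier to unrelated work, but still spawns far fewer hpx threads
            ex::scheduler_executor sched{ex::thread_pool_scheduler{}};
            hpx::for_loop(hpx::execution::par.on(sched), std::size_t(0), num_tasks,
                [&](std::size_t i) { run_task(i); });
        }
    }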
<gonidelis[m]>
nothing else is happening besides my for_loop
<gonidelis[m]>
oh wait
<gonidelis[m]>
ahhhhh......
<ms[m]>
it's the only work in the benchmark, and that's fair, I'm just setting expectations in case you ever want to use that while other work is executing (or suggest that to someone else)
<gonidelis[m]>
yeah yeah
<gonidelis[m]>
thank you for that remark. actually i will need to double check. let me try the scheduler thread pool one
diehlpk_work has joined #ste||ar
FunMiles has quit [Remote host closed the connection]
FunMiles has joined #ste||ar
FunMiles has quit [Ping timeout: 265 seconds]
<hkaiser>
ms[m]: wrt operator new(): you're right, I somehow messed it up in my mind...
hkaiser has quit [Quit: Bye!]
hkaiser has joined #ste||ar
FunMiles has joined #ste||ar
<diehlpk_work>
K-ballo, Hi
<diehlpk_work>
Are you still interested in contributing to the HPX paper?
<gonidelis[m]>
hkaiser: pm
diehlpk_work has quit [Ping timeout: 264 seconds]
FunMiles has quit [Remote host closed the connection]
FunMiles has joined #ste||ar
diehlpk_work has joined #ste||ar
FunMiles has quit [Ping timeout: 268 seconds]
FunMiles has joined #ste||ar
FunMiles has quit [Remote host closed the connection]