hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
Yorlik has quit [Ping timeout: 250 seconds]
K-ballo has quit [Ping timeout: 240 seconds]
K-ballo has joined #ste||ar
<gonidelis[m]>
hkaiser: did you happen to check the docs HTML that I sent you last week?
<hkaiser>
gonidelis[m]: yeah, let's go with it for now, it's still an RC
<hkaiser>
I will do it for the final release
<gonidelis[m]>
hkaiser: going over the work stealing scheduler
<gonidelis[m]>
is this intended for 1.8?
<hkaiser>
gonidelis[m]: no
<gonidelis[m]>
ok
<hkaiser>
I don't think this will be ready soon
<gonidelis[m]>
ok, and about the performance test report: where is the 5%-10% perf increase visible?
<gonidelis[m]>
in the report i mean
<hkaiser>
gonidelis[m]: the perf test does not use the new scheduler
<hkaiser>
ATOMIC_FLAG_INIT has been deprecated in C++20, so #5864 makes sure it's not used anymore
<gonidelis[m]>
the macro name is different though ;p
<hkaiser>
is it?
<gonidelis[m]>
INIT_FLAG
<gonidelis[m]>
FLAG_INIT
<hkaiser>
ahh, it's just a typo, thanks
<hkaiser>
will fix in the PR
<gonidelis[m]>
what's its purpose, though?
<gonidelis[m]>
why initialize atomic_flag?
<hkaiser>
read the cppref page
<gonidelis[m]>
why do I need to ^^
<gonidelis[m]>
yes that's a question on cppref
<hkaiser>
let's talk Thu
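For context on the macro discussed above: before C++20, default-constructing a std::atomic_flag left it in an unspecified state, so ATOMIC_FLAG_INIT was the portable way to get a cleared flag; C++20 makes default construction clear the flag and deprecates the macro, which is what #5864 addresses. A minimal sketch of both styles, illustrative only and not the code from the PR:

    #include <atomic>

    // Pre-C++20: default construction left the flag in an unspecified
    // state, so the macro was needed to start from a defined (clear) state.
    std::atomic_flag old_style = ATOMIC_FLAG_INIT;

    // C++20: default construction initializes the flag to clear, so the
    // (now deprecated) macro is no longer necessary.
    std::atomic_flag new_style;

    int main()
    {
        // test_and_set() returns the previous value: false the first time
        // on a cleared flag, true on subsequent calls.
        return new_style.test_and_set() ? 1 : 0;
    }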
diehlpk has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
diehlpk has quit [Ping timeout: 240 seconds]
diehlpk has joined #ste||ar
diehlpk has left #ste||ar [#ste||ar]
hkaiser has quit [Quit: Bye!]
Yorlik has joined #ste||ar
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
K-ballo has quit [Read error: Connection reset by peer]
K-ballo has joined #ste||ar
K-ballo has quit [Read error: Connection reset by peer]
K-ballo has joined #ste||ar
<satacker[m]>
In the case of a class inheriting `tag_fallback_noexcept`, how do I make sure that tag dispatching takes place when the user writes an overload with an incorrect return type?
<hkaiser>
satacker[m]: how do you overload on the return type?
<hkaiser>
I don't think that's possible
<satacker[m]>
hkaiser: Sorry, I meant `tag_invoke`
<hkaiser>
C++ doesn't allow you to overload functions based on the return type
<satacker[m]>
hkaiser: yes, I used the wrong terminology entirely. Say tag dispatching uses `tag_fallback_noexcept`: when will the user's `tag_invoke` be called?
<hkaiser>
tag_invoke will be tried first (i.e. used if it is valid); tag_fallback_invoke will be used (if it is valid) only if no tag_invoke overloads are available
<satacker[m]>
hkaiser: Thanks, but how do I make sure the tag_invoke return type is valid? Is that not possible?
<satacker[m]>
(The tag_invoke which user implements)
<hkaiser>
it will fail to compile if not
<satacker[m]>
Okay, thanks. I've probably made some other mistake, because it compiles; only a static_assert fails for other reasons.
<K-ballo>
how can it compile if static_assert fails?
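A minimal sketch of the dispatch order hkaiser describes, using a hypothetical standalone CPO (my_cpo_t, C++20) rather than HPX's actual tag_fallback_noexcept machinery: a user-supplied tag_invoke overload found by ADL is preferred, and tag_fallback_invoke is used only when no tag_invoke is viable.

    #include <iostream>
    #include <utility>

    // Hypothetical CPO illustrating the priority: try tag_invoke first,
    // fall back to tag_fallback_invoke only if no tag_invoke is viable.
    struct my_cpo_t
    {
        template <typename T>
        void operator()(T&& t) const
        {
            if constexpr (requires { tag_invoke(my_cpo_t{}, std::forward<T>(t)); })
                tag_invoke(my_cpo_t{}, std::forward<T>(t));          // user overload
            else
                tag_fallback_invoke(my_cpo_t{}, std::forward<T>(t)); // default
        }
    };

    // Generic default, used only when no tag_invoke overload exists.
    template <typename T>
    void tag_fallback_invoke(my_cpo_t, T const&)
    {
        std::cout << "fallback implementation\n";
    }

    inline constexpr my_cpo_t my_cpo{};

    struct widget {};

    // User customization: found by ADL and preferred over the fallback.
    void tag_invoke(my_cpo_t, widget const&)
    {
        std::cout << "user tag_invoke\n";
    }

    int main()
    {
        my_cpo(widget{}); // prints "user tag_invoke"
        my_cpo(42);       // prints "fallback implementation"
    }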
<gonidelis[m]>
hkaiser: could you please check your email?
<hkaiser>
gonidelis[m]: will do
<gonidelis[m]>
Thanks!
<hkaiser>
gonidelis[m]: see pm, pls
<Yorlik>
o/
<hkaiser>
\o
<Yorlik>
How's HPX doing these days? I've been quite a bit out of the loop. We're still at 1.7.1 and working on connecting Unreal Engine 5 to the server.
<Yorlik>
Seems everyone is busy - gotta run. See you another day :)
* Yorlik
waves and fades
Yorlik has quit [Quit: Leaving]
<diehlpk_work>
hkaiser, Ok, Boost does not support cross-compilation using the Fujitsu compiler
<hkaiser>
diehlpk_work: nod, do we need the Fujitsu compiler?
<diehlpk_work>
hkaiser, Do we have a bug in the hello_world_distributed?
<diehlpk_work>
If I run it without mpiexec, it shows locality 0 with 48 cores
<hkaiser>
diehlpk_work: if you run it without mpiexec it will create one locality
<diehlpk_work>
If I run it with mpiexec, it shows locality 0 with 48 cores as well, but the cores are printed multiple times
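A quick way to check what is being reported here, as a sketch only (HPX include paths vary between releases): without a launcher HPX bootstraps exactly one locality, numbered 0, using all visible cores; under mpiexec -n N one would expect N localities, so N copies that all report locality 0 would suggest the MPI parcelport isn't active.

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/runtime.hpp>

    #include <cstdint>
    #include <iostream>

    int main()
    {
        // Launched plainly, HPX bootstraps a single locality (id 0) that
        // uses all visible cores; under mpiexec -n N this should report N.
        std::uint32_t n = hpx::get_num_localities(hpx::launch::sync);
        std::cout << "locality " << hpx::get_locality_id()
                  << " of " << n << " localities, "
                  << hpx::get_os_thread_count() << " worker threads\n";
        return 0;
    }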
<diehlpk_work>
If we have > 32 NUMA domains, then HPX gets worse
<diehlpk_work>
Nan is inviting you
<diehlpk_work>
hkaiser, I assume that HPX does not read the env from the new scheduler correctly, and therefore a single-node run is slower than on Ookami using slurm
<hkaiser>
yes, I'm surprised anything works at all
K-ballo has quit [Ping timeout: 250 seconds]
K-ballo has joined #ste||ar
diehlpk_work has quit [Remote host closed the connection]