weilewei has quit [Remote host closed the connection]
diehlpk_work has quit [Remote host closed the connection]
hkaiser has quit [Quit: bye]
shahrzad_ has quit [Remote host closed the connection]
akheir has quit [Quit: Leaving]
jbjnr has joined #ste||ar
bita_ has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
akheir has joined #ste||ar
sestro[m] has joined #ste||ar
jbjnr_ has joined #ste||ar
<ms[m]>
hi sestro, I don't think there's any real fundamental reason why that couldn't be done
jbjnr has quit [Ping timeout: 260 seconds]
<ms[m]>
I simply restricted it when it was implemented because doing it with multiple localities was untested and not needed for our use cases at the time
<ms[m]>
I'd recommend you open an issue for that and I can try to enable that to work with multiple localities (it'd be interesting to hear your use case as well, I haven't really heard of anyone external trying to use the runtime suspension feature!)
<sestro[m]>
okay, that sounds promising.
<sestro[m]>
my use case is using hpx in a small part of a rather complex simulation framework (c++ libs wrapped using python) where different components use different parallelization schemes
<sestro[m]>
and I have to ensure there is no interference between hpx and whatever the other modules are using. so I assumed completely suspending the runtime in between might be the safest option
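A minimal sketch of the suspend/resume pattern being discussed, assuming an HPX version that provides hpx::start, hpx::suspend and hpx::resume (exact headers and overloads differ between releases); the work function and the non-HPX phase are placeholders.

```cpp
#include <hpx/hpx_start.hpp>
#include <hpx/hpx_suspend.hpp>
#include <hpx/include/apply.hpp>
#include <hpx/include/run_as.hpp>

void hpx_phase()
{
    // node-local HPX work (hpx::async, parallel algorithms, ...)
}

int main(int argc, char* argv[])
{
    // Start the runtime without running hpx_main; the main thread stays free.
    hpx::start(nullptr, argc, argv);

    // Run HPX work from the main thread; blocks until the function returns.
    hpx::run_as_hpx_thread(&hpx_phase);

    // Quiesce all HPX worker threads before handing the cores to other runtimes.
    hpx::suspend();

    // ... OpenMP / plain-thread phase of the framework goes here ...

    // Wake the worker threads up again for the next HPX phase.
    hpx::resume();
    hpx::run_as_hpx_thread(&hpx_phase);

    // Shut down: finalize has to be called from an HPX thread, then stop.
    hpx::apply([]() { hpx::finalize(); });
    return hpx::stop();
}
```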
<ms[m]>
sestro: sounds good
<ms[m]>
are the other libs using openmp or some other runtime? how do you do communication in the non-hpx parts?
<sestro[m]>
yes, some use openmp, others are using plain threads afaik. the communication is done in mpi and/or gaspi.
<ms[m]>
note that you can also use plain mpi explicitly with hpx if you wish
<ms[m]>
it's not as nice as using the hpx parcelports under the hood, but it is an alternative
jbjnr_ has quit [Ping timeout: 256 seconds]
<ms[m]>
in that case each mpi process is an independent hpx locality
weilewei has joined #ste||ar
<sestro[m]>
ms: yeah, that is what I'm currently doing, but I wouldn't mind using the parcelports at some point.
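A rough sketch of how one independent HPX locality per MPI rank can combine with the suspension pattern above; it assumes MPI is called only from the main thread while HPX is suspended, and that HPX is configured so the ranks don't bootstrap a multi-locality run over their own parcelport (details omitted).

```cpp
#include <mpi.h>
#include <hpx/hpx_start.hpp>
#include <hpx/hpx_suspend.hpp>
#include <hpx/include/apply.hpp>
#include <hpx/include/run_as.hpp>

int main(int argc, char* argv[])
{
    // MPI is only used from the main thread while HPX is suspended,
    // so MPI_THREAD_FUNNELED is sufficient here.
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    hpx::start(nullptr, argc, argv);    // one single-locality runtime per rank

    double local_result = 0.0;
    hpx::run_as_hpx_thread([&] {
        // node-local HPX work producing this rank's contribution
        local_result = static_cast<double>(rank);
    });

    hpx::suspend();                     // no HPX worker threads are active now

    // inter-rank communication stays in plain MPI
    double global_result = 0.0;
    MPI_Allreduce(&local_result, &global_result, 1, MPI_DOUBLE, MPI_SUM,
        MPI_COMM_WORLD);

    hpx::resume();                      // continue with HPX work if needed

    hpx::apply([]() { hpx::finalize(); });
    hpx::stop();
    MPI_Finalize();
    return 0;
}
```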
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
bita_ has joined #ste||ar
diehlpk_work has joined #ste||ar
<ms[m]>
zao, diehlpk: ever tried to use openmpi with clang? we're getting an `-fstack-clash-protection` flag from a gcc build of openmpi, which clang then doesn't like. wondering if this sounds familiar to either of you?
<zao>
All our occurrences of Clang are on a non-MPI level.
weilewei has quit [Remote host closed the connection]
weilewei has joined #ste||ar
<gnikunj[m]>
diehlpk: done with gsoc evaluation
sestro[m] has left #ste||ar ["User left"]
<zao>
The flag rings some vague bells, but nothing concrete.
hkaiser_ has joined #ste||ar
hkaiser has quit [Ping timeout: 260 seconds]
jbjnr_ has joined #ste||ar
weilewei has quit [Remote host closed the connection]
<diehlpk_work>
ms[m], No, I only used clang without mpi
<diehlpk_work>
I compile my code using clang on my local machine
<akheir>
ms[m], I've compiled openmpi 4 with clang 10. The module is available when you load clang. Give that a try
K-ballo has quit [*.net *.split]
K-ballo has joined #ste||ar
weilewei has joined #ste||ar
diehlpk has joined #ste||ar
<ms[m]>
akheir, zao, diehlpk_work thanks!
<gnikunj[m]>
hkaiser_: yt?
jbjnr__ has joined #ste||ar
jbjnr_ has quit [Read error: Connection reset by peer]
jbjnr__ has quit [Remote host closed the connection]
jbjnr__ has joined #ste||ar
diehlpk has quit [Ping timeout: 260 seconds]
weilewei has quit [Remote host closed the connection]
weilewei has joined #ste||ar
<gonidelis[m]>
hkaiser_: fixed the "begin-end iterator" mistake
<hkaiser_>
gonidelis[m]: \o/
<gonidelis[m]>
"begin-end iterator" ^^
<gonidelis[m]>
hkaiser_: I am going to look into why the tests aren't invoked now ;)
<gonidelis[m]>
and add sentinel tests
<gonidelis[m]>
but everything should compile thus far
<parsa>
hkaiser_: ping
<hkaiser_>
here
<parsa>
hkaiser_: do you have ~10 minutes for a quick Zoom call?