hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu | Everybody: please respond to the documentation survey: https://forms.gle/aCpNkhjSfW55isXGA
diehlpk has joined #ste||ar
<hkaiser> diehlpk: hey, see pm, pls
diehlpk has quit [Quit: Leaving.]
K-ballo has quit [Quit: K-ballo]
diehlpk has joined #ste||ar
hkaiser has quit [Quit: Bye!]
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
Yorlik_ has joined #ste||ar
Yorlik_ is now known as Yorlik
Yorlik has quit [Read error: Connection reset by peer]
<ms[m]> diehlpk_work: perfect, thanks!
Yorlik has joined #ste||ar
diehlpk has joined #ste||ar
hkaiser has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
<gnikunj[m]> ms: did you understand my explanation on #5445?
<hkaiser> gnikunj[m]: I think I found the problem causing the asan errors
<hkaiser> thanks for spending time investigating
<gnikunj[m]> hkaiser: you did? What's causing them?
<hkaiser> my stupidity, as always :/
<gnikunj[m]> hkaiser: well well well... what will we mere plebeians do if you start calling yourself stupid ;-)
<gonidelis[m]> lol
<hkaiser> gnikunj[m]: rule no 1: if you use reference counting, always make sure it doesn't drop to zero prematurely ;-)
K-ballo has joined #ste||ar
<gnikunj[m]> I will add that to my list of rules to follow ;-)
<gonidelis[m]> i thought rule no. 1 was the informational-field
<hkaiser> ;-)
<hkaiser> gonidelis[m]: world view my friend, world view
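A minimal sketch of the premature-refcount pitfall behind "rule no 1", using std::shared_ptr and a hypothetical deferred-callback queue rather than HPX's actual intrusive reference counting:

    #include <functional>
    #include <iostream>
    #include <memory>
    #include <vector>

    struct widget {
        void work() { std::cout << "working\n"; }
    };

    // hypothetical queue of callbacks that run some time later
    std::vector<std::function<void()>> pending;

    void schedule_bad(widget* w) {
        // Bug: only a raw pointer is captured; if the last shared_ptr is
        // released before the callback runs, the reference count drops to
        // zero prematurely and the callback touches freed memory.
        pending.push_back([w] { w->work(); });
    }

    void schedule_good(std::shared_ptr<widget> w) {
        // Fix: capture the shared_ptr itself so the pending callback holds
        // a reference and the count cannot reach zero while it is queued.
        pending.push_back([w] { w->work(); });
    }

    int main() {
        auto w = std::make_shared<widget>();
        schedule_good(w);           // safe: the callback keeps the widget alive
        // schedule_bad(w.get());   // unsafe once `w` is reset below
        w.reset();                  // drop our reference
        for (auto& f : pending) f();
    }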
diehlpk has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
<gnikunj[m]> hkaiser: does the execution policy seq(task) run the algorithm sequentially on another HPX thread? (it returns a future)
<hkaiser> gnikunj[m]: yes
<gnikunj[m]> hkaiser: understood. I'm still wrapping my head around why we would want that though
<hkaiser> gnikunj[m]: it's mostly for symmetry reasons
diehlpk has joined #ste||ar
<ms[m]> gnikunj: it's useful if you know you don't have enough work for parallel execution, but you want to start some other work in the meantime on the current thread
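A rough sketch of that use case, assuming the hpx::for_each overload that accepts an execution policy (header names and the exact return type may differ between HPX versions):

    #include <hpx/algorithm.hpp>
    #include <hpx/execution.hpp>
    #include <hpx/hpx_main.hpp>
    #include <vector>

    // stand-in for whatever other work the current thread should get on with
    void do_other_work() {}

    int main() {
        std::vector<int> v(1000, 1);

        // seq(task): the algorithm runs sequentially, but on another HPX
        // thread, so the call returns a future immediately...
        auto f = hpx::for_each(
            hpx::execution::seq(hpx::execution::task),
            v.begin(), v.end(), [](int& x) { x *= 2; });

        // ...leaving the current thread free in the meantime.
        do_other_work();

        f.get();    // join with the sequential algorithm
        return 0;
    }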
<ms[m]> and yes, saw your explanation, and I even had a response to it but I see I never posted it
<ms[m]> now what remains is this: could you just make the two spinlocks look the same, so that a future you (or I, or anyone else) doesn't need to wonder why one uses while () {} and the other uses do {} while(), and what's different between the two?
<ms[m]> hkaiser: could I ask you to remove email addresses that you know don't exist anymore from point 25 here: https://github.com/STEllAR-GROUP/hpx/blob/master/docs/sphinx/contributing/release_procedure.rst? I keep getting bounces from many of them, but I don't know if it's because they're inactive or just because I'm not allowed to send messages to them
<hkaiser> ms[m]: will do!
<ms[m]> thank you!
<gnikunj[m]> <ms[m] "now what remains is this: could "> I can make them both while() and make it to an extra atomic load. What do you suggest?
<gnikunj[m]> there isn't a real way to make them look identical without doing an additional (redundant) operation
<gnikunj[m]> ms: ^^
<hkaiser> ms[m]: just fyi: all gpu builders on rostam are down currently because Alireza is still physically moving nodes around
<hkaiser> something has changed, however, as the other builders do not seem to fail anymore
<hkaiser> I know Alireza has updated some configurations etc, so this might have been the issue
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
<ms[m]> gnikunj: there isn't any way to make them identical full stop, or there isn't any way without adding/removing some helper functions? they're both spinlocks
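For reference, the two loop shapes under discussion, written against a plain std::atomic_flag (a generic sketch, not the actual HPX spinlock code):

    #include <atomic>

    struct spinlock_while {
        std::atomic_flag locked = ATOMIC_FLAG_INIT;

        void lock() {
            // while-style: keep retrying until test_and_set reports the
            // flag was previously clear
            while (locked.test_and_set(std::memory_order_acquire)) {
                // spin (a real implementation would yield/back off here)
            }
        }
        void unlock() { locked.clear(std::memory_order_release); }
    };

    struct spinlock_do_while {
        std::atomic_flag locked = ATOMIC_FLAG_INIT;

        void lock() {
            // do-while-style: attempt the acquire once, then keep retrying;
            // behaviourally the same, just shaped differently
            bool was_set;
            do {
                was_set = locked.test_and_set(std::memory_order_acquire);
            } while (was_set);
        }
        void unlock() { locked.clear(std::memory_order_release); }
    };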
<ms[m]> hkaiser: perfect, thank you
<ms[m]> let's hope that was it then
<ms[m]> is he going to announce when they're back up on the mailing list? just wondering if we'll need to make changes to the ci configurations afterwards
<hkaiser> ms[m]: yes
<hkaiser> that's the plan
<gonidelis[m]> hkaiser: do we meet now?
<hkaiser> there will be new slurm partitions, so we might need changes to the ci
<hkaiser> gonidelis[m]: if you want, sure
<gonidelis[m]> ok
<ms[m]> ok, thanks
<gonidelis[m]> i am in
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
diehlpk has quit [Client Quit]
diehlpk_work has joined #ste||ar
<gnikunj[m]> hkaiser: just to make sure, hpx::launch::sync for a continuation means that it will use the HPX thread stack from the previous future and work on it without creating a new thread, right? Or does it simply mean that it'll be a blocking async but still return a future?
<hkaiser> hpx::launch::sync causes the thread that makes the future ready to keep on going and to execute the continuations as well
<gnikunj[m]> right, so I remembered correctly
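A small sketch of the behaviour hkaiser describes, using future::then with hpx::launch::sync (illustrative; header names may vary by HPX version):

    #include <hpx/future.hpp>
    #include <hpx/hpx_main.hpp>
    #include <iostream>

    int main() {
        hpx::future<int> f = hpx::async([] { return 42; });

        // With launch::sync the continuation does not get its own HPX
        // thread: the thread that makes `f` ready keeps going and runs
        // the continuation inline.
        hpx::future<int> g = f.then(
            hpx::launch::sync,
            [](hpx::future<int> r) { return r.get() + 1; });

        std::cout << g.get() << "\n";   // prints 43
        return 0;
    }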
diehlpk has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
<srinivasyadav227> gnikunj: hkaiser Thank you so much for the first phase GSoC evaluations, and the nice feedback! ;-)
<hkaiser> srinivasyadav227: thanks for your work!
<srinivasyadav227> ;-)