hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: Bye!]
<mdiers[m]> * I have a small question about the changes in the configuration regarding the serialization. There is now the block:... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/8a2988322f45da441568f99e2c3247c35a8a2889)
K-ballo has joined #ste||ar
parsa[fn] has joined #ste||ar
parsa[fn] has quit [Client Quit]
hkaiser has joined #ste||ar
diehlpk_work has joined #ste||ar
<gonidelis[m]> What does it mean to "move work to data rather than moving data to work"?
<hkaiser> gonidelis[m]: executing threads next to where the data is located instead of shipping the data over to where the code runs
<gonidelis[m]> Is it literally the notion of moving the thread objects rather than the container partitions?
<hkaiser> in a sense yes
<gonidelis[m]> next to?
<hkaiser> just that we don't move threads, we create new ones
<hkaiser> execute threads next to where the data is located
<gonidelis[m]> hmm ok i get the "Create new threads" part
<gonidelis[m]> you mean in memory?
<gonidelis[m]> what does "next to" refer to ?
<hkaiser> yes, or to the node that holds the data
<gonidelis[m]> locality?
<hkaiser> yes
<gonidelis[m]> hmmm
<gonidelis[m]> so option 1. we partition the data, distributed to the resources and then initiate work locally (that's HPX)
<gonidelis[m]> whats option 2? what's a counterexample?
<hkaiser> moving the data to where the code is running
<gonidelis[m]> distribute it to the resources i meant ^^
<hkaiser> that's what MPI is doing
<gonidelis[m]> yeah i mean is there a runtime that does that?
<gonidelis[m]> aha
<gonidelis[m]> the code is running on multiple machines though? no?
<gonidelis[m]> in mpi i mean
<hkaiser> yes
<hkaiser> gonidelis[m]: we always say that we _prefer_ moving work instead of data
<gonidelis[m]> lemme check how mpi moves data
<hkaiser> using mpi_send/mpi_recv
<gonidelis[m]> !!!
<gonidelis[m]> thanks!
<gonidelis[m]> i thought send/recv was just for signaling, had forgotten the buffering part
<gonidelis[m]> makes sense now
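(A minimal sketch of the contrast discussed above. The MPI half uses the standard MPI_Send/MPI_Recv calls; the HPX half uses a plain action, HPX's mechanism for running work on the locality that holds the data. Names such as process_chunk and the buffer size are illustrative assumptions, not part of the discussion:)

    // "Move data to work": rank 0 owns the data, rank 1 runs the code,
    // so the buffer has to travel over the network first.
    #include <mpi.h>
    #include <vector>

    void mpi_style(int rank)
    {
        std::vector<double> chunk(1024);
        if (rank == 0)
        {
            MPI_Send(chunk.data(), 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        }
        else if (rank == 1)
        {
            MPI_Recv(chunk.data(), 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                MPI_STATUS_IGNORE);
            // ... compute on chunk here ...
        }
    }

    // "Move work to data": a new HPX thread is created on the remote
    // locality, next to the data it operates on; no buffer is shipped.
    #include <hpx/hpx.hpp>

    double process_chunk()    // runs wherever the action is dispatched
    {
        // ... compute on the locally held partition here ...
        return 0.0;
    }
    HPX_PLAIN_ACTION(process_chunk, process_chunk_action)

    void hpx_style()
    {
        for (hpx::id_type const& loc : hpx::find_remote_localities())
        {
            hpx::future<double> f = hpx::async(process_chunk_action{}, loc);
            f.get();
        }
    }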
sarkar_t[m] has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar
<gonidelis[m]> hkaiser: is `HPX_SMT_PAUSE` a way to achieve spinlock in order to keep the thread alive?
<gnikunj[m]> gonidelis: where'd you find that in the code? (the spinlock implementation?)
<gnikunj[m]> Aah, no, that's not a way to "achieve" a spinlock. It's there to tell the CPU that the currently executing code is part of a spinlock, so the CPU can optimize it accordingly (for instance by emitting the PAUSE instruction on x86 - see the _mm_pause documentation)
<gnikunj[m]> it saves a few CPU cycles that way, and by using the PAUSE instruction you end up saving power too (since you do nothing)
<gonidelis[m]> Who implements the spinlock if not hpx?
<gnikunj[m]> Didn't quite get your question here. HPX has spinlock implementations. See - https://github.com/STEllAR-GROUP/hpx/blob/ae45453ace32e6f5da0af9b63e09cae0876e3ada/libs/core/thread_support/src/spinlock.cpp
<hkaiser> gonidelis[m]: it's a no-op that helps reduce the pressure on the execution pipeline
<hkaiser> we talked about that, I believe
<gonidelis[m]> hkaiser: yes we talked about it. i didn't know that this was it
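(A minimal sketch of the spin-wait pattern HPX_SMT_PAUSE exists for. On x86 the macro would expand to the PAUSE hint, which this sketch calls directly through _mm_pause; the flag and the loop are illustrative, not HPX's actual code:)

    #include <atomic>
    #include <immintrin.h>    // _mm_pause (x86)

    void spin_wait(std::atomic<bool>& ready)
    {
        while (!ready.load(std::memory_order_acquire))
        {
            // tell the CPU this is a busy-wait loop: relax the execution
            // pipeline and save power instead of re-issuing loads at full rate
            _mm_pause();
        }
    }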
<gnikunj[m]> gonidelis: if you're taking a look at HPX's spinlocks. You may want to take a look here too - https://github.com/NK-Nikunj/Cpp-Locks
<gnikunj[m]> I wanted to improve the performance of HPX's locks, so I implemented some widely used spinlock techniques there. You may want to do some testing on it and integrate things back into HPX :P
<gonidelis[m]> never got merged?
<gonidelis[m]> sounds interesting
<gonidelis[m]> i mean integrated
<gonidelis[m]> ^^
<gnikunj[m]> I could never do the in-depth performance analysis and the required updates to the implementation to actually integrate things back into HPX. The locks I implemented there reduce cache traffic.
<gonidelis[m]> Niche
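(A minimal sketch of one widely used cache-traffic-reducing technique of the kind mentioned above: a test-and-test-and-set spinlock. Waiters spin on a plain load, so the lock's cache line stays in shared state and no coherence traffic is generated until the holder releases it. This is an illustration, not the Cpp-Locks implementation itself:)

    #include <atomic>
    #include <immintrin.h>    // _mm_pause (x86)

    class ttas_spinlock
    {
        std::atomic<bool> locked_{false};

    public:
        void lock()
        {
            for (;;)
            {
                // the exchange invalidates the cache line in other cores,
                // so attempt it only when the lock looks free
                if (!locked_.exchange(true, std::memory_order_acquire))
                    return;
                // otherwise spin on a cheap read (the "test-and-test" part)
                while (locked_.load(std::memory_order_relaxed))
                    _mm_pause();
            }
        }

        void unlock()
        {
            locked_.store(false, std::memory_order_release);
        }
    };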
diehlpk_work has quit [Remote host closed the connection]