hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
scofield_zliu has joined #ste||ar
<Aarya[m]> Okay so I have finished building both hpx and hpxc rtohid
scofield_zliu has quit [Ping timeout: 248 seconds]
<rtohid[m]> <Aarya[m]> "Okay so I have finished building..." <- 👍. Next step would be building openmp:
K-ballo has quit [Ping timeout: 260 seconds]
K-ballo1 has joined #ste||ar
K-ballo1 is now known as K-ballo
hkaiser has joined #ste||ar
Yorlik__ has joined #ste||ar
Guest53 has joined #ste||ar
Yorlik_ has quit [Ping timeout: 246 seconds]
Guest53 has quit [Ping timeout: 260 seconds]
Guest53 has joined #ste||ar
<Guest53> Any resources I can look into to learn how execution policies are implemented in C++?
<Guest53> Are they implemented using pragmas? Like
<Guest53>         struct unseq_loop_n_ind
<Guest53>         {
<Guest53>             template <typename InIter, typename F>
<Guest53>             HPX_HOST_DEVICE HPX_FORCEINLINE static InIter call(
<Guest53>                 InIter HPX_RESTRICT it, std::size_t num, F&& f)
<Guest53>             {
<Guest53>                 // clang-format off
<Guest53>                 HPX_IVDEP HPX_UNROLL HPX_VECTORIZE
<Guest53>                 for (std::size_t i = 0; i != num; ++i)
<Guest53>                 {
<Guest53>                     HPX_INVOKE(f, *it);
<Guest53>                     ++it;
<Guest53>                 }
<Guest53>                 // clang-format on
<Guest53>                 return it;
<Guest53>             }
<Guest53>         };
<Guest53> is there somewhere in the hpx documentation I can look to learn about this?
<srinivasyadav18[> Guest53: You mean how execution policies are implemented in HPX, or in std::execution?
Yorlik_ has joined #ste||ar
<srinivasyadav18[> Guest53: The pragmas are used in helper functions like unseq_loop, which are called by parallel-algorithms (like for_each, transform etc..) in HPX when the first argument (execution policy) is unseq or par_unseq
<Guest53> how are they implemented in hpx?
<Guest53> """The pragmas are used in helper functions like unseq_loop, which are called by parallel-algorithms (like for_each, transform etc..) in HPX when the first argument (execution policy) is unseq or par_unseq"""
<Guest53> Any idea what this PR aims to do in that case?
Yorlik__ has quit [Ping timeout: 248 seconds]
<Guest53> I see that it defines a few structs, but I don't really understand how they connect to execution policies
<srinivasyadav18[> Guest53: okay, the execution policies are more used like tags
<Guest53> I am aware execution policies are used like for_each(hpx::execution::seq, it_begin, it_end)
<srinivasyadav18[> for example, when for_each(hpx::execution::unseq, begin, end, func) is called, the internal loop implementation dispatches it to a different overload which (may) do vectorization using pragmas
<Guest53> ok
<Guest53> I am trying to look into implementing the par_unseq execution policy; I was told some algorithms already have it implemented
<Guest53> any idea where I can look for their implementation
<srinivasyadav18[> in a nutshell, the for_each algorithm internally uses the util::loop helper function. So, when for_each takes hpx::execution::seq as its first argument, one overload of util::loop is called from the for_each algorithm, and when for_each takes hpx::execution::unseq, another overload will be called
<srinivasyadav18[> Guest53: okay. I would suggest try to look here : https://github.com/STEllAR-GROUP/hpx/tree/master/libs/core/algorithms/include/hpx/parallel/unseq if you are trying to understand how existing algorithms are implemented
<srinivasyadav18[> Guest53: for example, here : https://github.com/STEllAR-GROUP/hpx/blob/master/libs/core/algorithms/include/hpx/parallel/unseq/loop.hpp#L207. We use tag_invoke to call the customized version of loop helper function when the execution policy is hpx::execution::unseq
<hkaiser> Guest53: it adds the unseq execution policies that were missing
<srinivasyadav18[> Guest53: those are the actual implementations of the execution policies unseq, par_unseq etc..
<Guest53> hpx::execution::unseq, another overload will be called
<Guest53> so the commit defines these overloads?
<srinivasyadav18[> Guest53: this commit (https://github.com/STEllAR-GROUP/hpx/pull/5889/commits/9659b525e8ff2b01d4a64d625900770483995d2b) only adds unseq execution policies
<srinivasyadav18[> Guest53: where as this PR https://github.com/STEllAR-GROUP/hpx/pull/5889, adds unseq overloads for for_each, transform and reduce algorithms
<Guest53> I do find tests for the for_each, rotate methods
<Guest53> but am not able to pinpoint where the support for the execution policy was added
<srinivasyadav18[> you want to find the location of execution policy or on which commit it was added ?
<Guest53> location might help
<Guest53>  https://github.com/STEllAR-GROUP/hpx/pull/5889 I had tried going through this PR but can only discern that structs corresponding to the execution policies have been added; I don't understand how they connect with the use of pragmas to parallelise/vectorize the process
<hkaiser> Guest53: that commit just added the policies but not the implementation of the algorithms, at that point the unseq policies simply fell back to the non-unseq implementations
<Guest53> okay, so from my understanding, implementing an execution policy like par_unseq would be done as described in this PR, right?
<hkaiser> which PR?
<hkaiser> Guest53: more like here: https://github.com/STEllAR-GROUP/hpx/pull/6018
<Guest53> sure, will go through it
<Guest53> https://github.com/STEllAR-GROUP/hpx/blob/master/libs/core/algorithms/include/hpx/parallel/unseq/loop.hpp does this file have the overloads which get called when I try to use the unseq execution policy?
<hkaiser> yes
<Guest53> ok thank you, will ping back if I need any help
<Guest53> also, the documentation mentions that HPX is based on a task-based model
<hkaiser> Guest53: please note that we have not done any performance test of those algorithms #6018 implements, that would be nice to have as well
<Guest53> so is a call like for_each treated as a single task which uses pragmas to parallelise the operation?
<hkaiser> Guest53: unseq: yes, par_unseq: launches new tasks
<Guest53> ok, do you think it'd help if i used gdb to just go through how the execution policy is implemented?
<hkaiser> absolutely
<hkaiser> gtg now, ttyl
hkaiser has quit [Quit: Bye!]
<Guest53> sure, I understand that the execution is ultimately done using loops headed by pragmas, but am rather confused about how all the various templated structs help get there
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
Yorlik__ has joined #ste||ar
Guest53 has quit [Ping timeout: 260 seconds]
Yorlik_ has quit [Ping timeout: 248 seconds]
<satacker[m]> <hkaiser> "strange, it never failed..." <- I am skeptical about the /run/media mount. Something ticks me off about it.
Guest53 has joined #ste||ar
<Aarya[m]> I'm getting "error: no member named 'bit_floor' in namespace 'llvm'; did you mean '__floor'?" when building openmp
<Aarya[m]> using clang
<satacker[m]> Did you mean clang with openmp support?
<Aarya[m]> No, just openmp I guess
Guest53 has quit [Quit: Client closed]
<Aarya[m]> Just have to go into the openmp directory and build accordingly, right?
<satacker[m]> I don't get what you are trying to build
<Aarya[m]> > <@rtohid:matrix.org> 👍. Next step would be building openmp:
<Aarya[m]> This
Guest53 has joined #ste||ar
Guest53 has quit [Ping timeout: 260 seconds]
ChanServ has quit [*.net *.split]
sarkar_t[m] has quit [*.net *.split]
gdaiss[m] has quit [*.net *.split]
Yorlik__ has quit [*.net *.split]
hhn[m] has quit [*.net *.split]
pansysk75[m] has quit [*.net *.split]
KhushiBalia[m] has quit [*.net *.split]
satacker[m] has quit [*.net *.split]
TanishqJain[m] has quit [*.net *.split]
PritKanadiya[m] has quit [*.net *.split]
ms[m]1 has quit [*.net *.split]
zao has quit [*.net *.split]
refade6897[m] has quit [*.net *.split]
rori[m] has quit [*.net *.split]
rtohid[m] has quit [*.net *.split]
talij55561[m] has quit [*.net *.split]
Aarya[m] has quit [*.net *.split]
mdiers[m] has quit [*.net *.split]
tufei_ has quit [*.net *.split]
srinivasyadav18[ has quit [*.net *.split]
dkaratza[m] has quit [*.net *.split]
VedantNimje[m] has quit [*.net *.split]
gonidelis[m] has quit [*.net *.split]
K-ballo has quit [*.net *.split]
sivoais has quit [*.net *.split]
Kalium has quit [*.net *.split]
Yorlik__ has joined #ste||ar
ms[m]1 has joined #ste||ar
K-ballo has joined #ste||ar
tufei_ has joined #ste||ar
rtohid[m] has joined #ste||ar
dkaratza[m] has joined #ste||ar
PritKanadiya[m] has joined #ste||ar
srinivasyadav18[ has joined #ste||ar
VedantNimje[m] has joined #ste||ar
KhushiBalia[m] has joined #ste||ar
pansysk75[m] has joined #ste||ar
Aarya[m] has joined #ste||ar
sarkar_t[m] has joined #ste||ar
gdaiss[m] has joined #ste||ar
zao has joined #ste||ar
rori[m] has joined #ste||ar
Kalium has joined #ste||ar
gonidelis[m] has joined #ste||ar
satacker[m] has joined #ste||ar
TanishqJain[m] has joined #ste||ar
refade6897[m] has joined #ste||ar
sivoais has joined #ste||ar
hhn[m] has joined #ste||ar
talij55561[m] has joined #ste||ar
mdiers[m] has joined #ste||ar
ChanServ has joined #ste||ar
Louis76 has joined #ste||ar
Louis76 has quit [Client Quit]
RostamLog_ has joined #ste||ar
RostamLog has quit [Ping timeout: 248 seconds]
Guest53 has joined #ste||ar
Guest53 has quit [Ping timeout: 260 seconds]
Nei| has joined #ste||ar
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 268 seconds]
K-ballo1 is now known as K-ballo
<Nei|> hey, are you participating this year?
Yorlik__ is now known as Yorlik
<zao> Hi to all the GSoC aspirants, nice to see some enthusiasm here :D
<zao> In general, it's a good idea to get the library and examples building on your system and get a feel for how the thing is supposed to be used. There's probably some talks or presentations on the design as well somewhere.
Nei|98 has joined #ste||ar
Nei| has quit [Ping timeout: 260 seconds]
Nei| has joined #ste||ar
Nei| has quit [Client Quit]
Nei|98 has quit [Client Quit]
Nei| has joined #ste||ar
Nei| has quit [Client Quit]
Nei| has joined #ste||ar
scofield_zliu has joined #ste||ar
Nei| has quit [Quit: Client closed]
Neeraj has joined #ste||ar
Neeraj has quit [Quit: Client closed]
hkaiser has joined #ste||ar
hkaiser has quit [Quit: Bye!]
<satacker[m]> hkaiser: I'll have to implement ADL isolation for all of the execution:: algorithms
tufei_ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
<satacker[m]> including as_sender_sender
<Aarya[m]> Hi I have completed building clang with openmp
<rtohid[m]> <Aarya[m]> "Hi I have completed building..." <- Now it's time to link openmp against hpxc. You'd need to make a couple of changes in the build system:
<rtohid[m]> Aarya: and set the flags: -DHPX_DIR=${HPX_DIR} -DWITH_HPXC=ON
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
<Aarya[m]> Umm I think your fork is a bit behind from the latest main
<Aarya[m]> Done
<Aarya[m]> It's building now
<Aarya[m]> Don't we have to give HPXC_DIR also?
<rtohid[m]> <Aarya[m]> "Don't we have to give HPXC_DIR..." <- Yes, you do
<Aarya[m]> Okay added. It's building
<rtohid[m]> * Aarya: and set the flags: -DHPX_DIR=${HPX_DIR} -DWITH_HPXC=ON -DHPXC_DIR=${HPXC_DIR}
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 268 seconds]
K-ballo1 is now known as K-ballo
LouisP has joined #ste||ar
LouisP has quit [Remote host closed the connection]
LouisP2 has joined #ste||ar
AbhishekYadav[m] has joined #ste||ar
tufei_ has joined #ste||ar
tufei__ has quit [Remote host closed the connection]
<Aarya[m]> Got this "libhpx_hpxc.a: No such file or directory"
<Aarya[m]> The hpxc build does not generate this; it generates libhpxcd.a
<rtohid[m]> <Aarya[m]> "The hpxc does not generate this...." <- Seems right https://github.com/rtohid/hpxc/blob/a3ff6d58358efa6b2eee2fc5b9365b96ec4d1295/src/CMakeLists.txt#L37
<rtohid[m]> It's `libhpx_hpxc.a` on my system though. hkaiser, gonidelis: has the cmake macro been changed?
<Aarya[m]> Also, the path it was looking in was cmake-install, which isn't created either
<rtohid[m]> Did you install?
<Aarya[m]> I did these:
<Aarya[m]> ```
<Aarya[m]> cmake -S . -DHPX_DIR=/path/to/hpx/lib/cmake/HPX -B cmake-build/
<Aarya[m]> cmake --build cmake-build/ --parallel
<Aarya[m]> ```
<rtohid[m]> You haven’t installed the dependencies, which is totally fine, just use the build paths instead
<Aarya[m]> Yeah used them only
scofield_zliu has quit [Ping timeout: 248 seconds]
hkaiser has joined #ste||ar
tufei__ has joined #ste||ar
tufei_ has quit [Remote host closed the connection]
diehlpk_work has joined #ste||ar
tufei_ has joined #ste||ar
tufei__ has quit [Remote host closed the connection]
scofield_zliu has joined #ste||ar
diehlpk_work has quit [Remote host closed the connection]
LouisP2 has quit [Quit: LouisP2]