hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
scofield_zliu has joined #ste||ar
<Aarya[m]>
Okay, so I have finished building both hpx and hpxc, rtohid
scofield_zliu has quit [Ping timeout: 248 seconds]
<rtohid[m]>
<Aarya[m]> "Okay so I have finished building..." <- 👍. Next step would be building openmp:
<Guest53>
is there somewhere in the hpx documentation I can look for this?
<srinivasyadav18[>
Guest53: You mean, how execution policies are implemented in hpx? or std::execution?
Yorlik_ has joined #ste||ar
<srinivasyadav18[>
Guest53: The pragmas are used in helper functions like unseq_loop, which are called by parallel algorithms (like for_each, transform, etc.) in HPX when the first argument (execution policy) is unseq or par_unseq
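For illustration, a rough sketch of what such a vectorized loop helper can look like (hypothetical name, not HPX's actual unseq_loop; the real helpers use HPX's own compiler-specific pragmas):

```cpp
#include <cstddef>

// Illustrative sketch only, not HPX's implementation: the unseq-style
// helpers wrap the per-element loop in a vectorization hint so the
// compiler may emit SIMD code for the body.
template <typename Iter, typename F>
void unseq_loop_sketch(Iter first, std::size_t count, F f)
{
    // Stand-in for the compiler-specific pragmas HPX selects internally.
#pragma omp simd
    for (std::size_t i = 0; i != count; ++i)
        f(first[i]);
}
```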
<Guest53>
how are they implemented in hpx?
<Guest53>
"""The pragmas are used in helper functions like unseq_loop, which are called by parallel-algorithms (like for_each, transform etc..) in HPX when the first argument (execution policy) is unseq or par_unseq"""
<Guest53>
Any idea what this PR aims to do in that case?
<Guest53>
I see that it defines a few structs, but I don't really understand how they connect to execution policies
<srinivasyadav18[>
Guest53: okay, the execution policies are used more like tags
<Guest53>
I am aware execution policies are used like for_each(hpx::execution::seq, it_begin, it_end)
<srinivasyadav18[>
for example, when for_each(hpx::execution::unseq, begin, end, func) is called, the internal loop implementation dispatches it to a different overload which may do vectorization using pragmas
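As a usage example (assuming the call runs inside the HPX runtime, e.g. from hpx_main):

```cpp
#include <hpx/algorithm.hpp>
#include <hpx/execution.hpp>
#include <vector>

// Assumes the HPX runtime is already running (e.g. called from hpx_main).
void scale_all(std::vector<double>& v)
{
    // The execution-policy tag (first argument) selects which internal
    // loop overload handles the elements; unseq asks for a vectorized loop.
    hpx::for_each(hpx::execution::unseq, v.begin(), v.end(),
        [](double& x) { x *= 2.0; });
}
```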
<Guest53>
ok
<Guest53>
I am trying to look into implementing the par_unseq execution policy; I was told some algorithms already have it implemented
<Guest53>
any idea where I can look for their implementations?
<srinivasyadav18[>
the gist is: the for_each algorithm internally uses the util::loop helper function. When for_each takes hpx::execution::seq as its first argument, one overload of util::loop is called from the for_each algorithm, and when it takes hpx::execution::unseq as its first argument, another overload is called
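A minimal sketch of that dispatch idea (made-up namespace and function, not the real hpx::parallel::util::loop signatures):

```cpp
#include <hpx/execution.hpp>

namespace sketch {

    // Overload selected when the algorithm was invoked with hpx::execution::seq.
    template <typename Iter, typename F>
    void loop(hpx::execution::sequenced_policy, Iter first, Iter last, F f)
    {
        for (Iter it = first; it != last; ++it)
            f(*it);
    }

    // Overload selected for hpx::execution::unseq: same loop, but wrapped in a
    // vectorization hint (stand-in for HPX's actual pragmas).
    template <typename Iter, typename F>
    void loop(hpx::execution::unsequenced_policy, Iter first, Iter last, F f)
    {
#pragma omp simd
        for (Iter it = first; it != last; ++it)
            f(*it);
    }
}
```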
<Guest53>
https://github.com/STEllAR-GROUP/hpx/pull/5889 I had tried going through this commit but could only discern that structs corresponding to the execution policies have been added; I don't understand how they connect with the use of pragmas to parallelise/vectorize the process
<hkaiser>
Guest53: that commit just added the policies but not the implementation of the algorithms; at that point the unseq policies simply fell back to the non-unseq implementations
<Guest53>
okay, so from my understanding, implementing an execution policy like par_unseq would be done as described in this PR, right?
<Guest53>
ok thank you, will ping back if I need any help
<Guest53>
also, the documentation mentions that HPX uses a task-based model
<hkaiser>
Guest53: please note that we have not done any performance tests of the algorithms #6018 implements; those would be nice to have as well
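A minimal sketch of such a performance test (assumptions: it runs inside the HPX runtime, and a single timed run per policy is enough for a first look; a real benchmark would repeat runs and control thread placement):

```cpp
#include <hpx/algorithm.hpp>
#include <hpx/execution.hpp>
#include <chrono>
#include <cstdio>
#include <vector>

// Times one hpx::for_each call under the given execution policy.
template <typename Policy>
double time_for_each(Policy policy, std::vector<double>& v)
{
    auto const start = std::chrono::steady_clock::now();
    hpx::for_each(policy, v.begin(), v.end(),
        [](double& x) { x = x * 2.0 + 1.0; });
    std::chrono::duration<double> const elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

// Assumes this runs inside the HPX runtime (e.g. from hpx_main).
void compare_policies()
{
    std::vector<double> v(1 << 24, 1.0);
    std::printf("seq:       %f s\n", time_for_each(hpx::execution::seq, v));
    std::printf("unseq:     %f s\n", time_for_each(hpx::execution::unseq, v));
    std::printf("par:       %f s\n", time_for_each(hpx::execution::par, v));
    std::printf("par_unseq: %f s\n",
        time_for_each(hpx::execution::par_unseq, v));
}
```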
<Guest53>
so is a call like for_each treated as a single task which uses pragmas to parallelise the operation?
<hkaiser>
Guest53: unseq: yes, par_unseq: launches new tasks
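A conceptual sketch of that difference (hypothetical helper, not HPX internals; header names may differ between HPX versions): unseq keeps one vectorized loop on the calling thread, while par_unseq splits the range into chunks and runs each chunk as its own HPX task with a vectorized inner loop.

```cpp
#include <hpx/future.hpp>
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical helper illustrating par_unseq-style execution: chunk the
// range, launch one HPX task per chunk, vectorize each chunk's inner loop.
template <typename Iter, typename F>
void par_unseq_sketch(Iter first, std::size_t count, F f, std::size_t num_chunks)
{
    std::size_t const chunk_size = (count + num_chunks - 1) / num_chunks;
    std::vector<hpx::future<void>> tasks;

    for (std::size_t c = 0; c != num_chunks; ++c)
    {
        std::size_t const begin = c * chunk_size;
        std::size_t const end = (std::min)(begin + chunk_size, count);
        if (begin >= end)
            break;

        // Each chunk becomes its own HPX task.
        tasks.push_back(hpx::async([=] {
            // Same vectorization hint as the single-task unseq case.
#pragma omp simd
            for (std::size_t i = begin; i != end; ++i)
                f(first[i]);
        }));
    }

    hpx::wait_all(tasks);
}
```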
<Guest53>
ok, do you think it'd help if I used gdb to just step through how the execution policy is implemented?
<hkaiser>
absolutely
<hkaiser>
gtg now, ttyl
hkaiser has quit [Quit: Bye!]
<Guest53>
sure, I understand that ultimately the execution is done using loops headed by pragmas, but I'm rather confused about how all the various templated structs get there
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
Yorlik__ has joined #ste||ar
Guest53 has quit [Ping timeout: 260 seconds]
Yorlik_ has quit [Ping timeout: 248 seconds]
<satacker[m]>
<hkaiser> "strange, it never failed..." <- I am skeptical about the /run/media mount. Something ticks me off about it.
Guest53 has joined #ste||ar
<Aarya[m]>
I'm getting "error: no member named 'bit_floor' in namespace 'llvm'; did you mean '__floor'?" when building openmp
<Aarya[m]>
using clang
<satacker[m]>
Did you mean clang with openmp support?
<Aarya[m]>
No, just openmp, I guess
Guest53 has quit [Quit: Client closed]
<Aarya[m]>
Just have to go into the openmp directory and build accordingly, right?
<satacker[m]>
I don't get what you are trying to build
<Aarya[m]>
> <@rtohid:matrix.org> 👍. Next step would be building openmp:
<zao>
Hi to all the GSoC aspirants, nice to see some enthusiasm here :D
<zao>
In general, it's a good idea to get the library and examples building on your system and get a feel for how the thing is supposed to be used. There are probably some talks or presentations on the design somewhere as well.
<satacker[m]>
hkaiser: I'll have to implement ADL isolation for all of the execution:: algorithms
tufei_ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
<satacker[m]>
including as_sender_sender
<Aarya[m]>
Hi, I have completed building clang with openmp
<rtohid[m]>
<Aarya[m]> "Hi I have completed building..." <- Now it's time to link openmp against hpxc. You'd need to make a couple of changes in the build system: