hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
<rtohid[m]> <Aarya[m]> "rtohid: what should I do next" <- First verify the build with and without dependency on hpxc, here's a minimal example:
<rtohid[m]> Once you've verified that, I'd suggest you start working on the proposal. You'd need to familiarize yourself with pthread and hpxc, and it is also recommended that you make some initial progress in the implementation
fares_atef has joined #ste||ar
<fares_atef> hello
<hkaiser> fares_atef: hello
<fares_atef> Mr. Rod Tohid told me that for the GSoC project (Study the performance of Halide applications running on HPX threads) I need to familiarize myself with the Halide language
<fares_atef> what is the next step ?
<fares_atef> and can i know more details about this project ?
<hkaiser> fares_atef: rod is here: rtohid[m]
<rtohid[m]> <fares_atef> "and can i know more details..." <- You’ll be implementing BLAS algorithms (and more) in Halide, and will compare the performance of your implementations against existing ones
<rtohid[m]> You can find some primary work here:
<rtohid[m]> As a first step, I’d suggest you install HPX and Halide
<fares_atef> amazing, when i return from my university i will do that <3  .
tufei_ has joined #ste||ar
Yorlik_ has joined #ste||ar
Yorlik has quit [Ping timeout: 260 seconds]
Neeraj has joined #ste||ar
fares_atef has quit [Quit: Client closed]
Neeraj has quit [Client Quit]
fares_atef has joined #ste||ar
fares_atef has quit [Client Quit]
hkaiser has quit [Quit: Bye!]
Neeraj has joined #ste||ar
<Neeraj> Hi, I want to contribute to the project "Implement a Faster Associative Container for GIDs". I have some ideas about it, and I emailed the mentor, but he didn't reply.
Neeraj has quit [Client Quit]
<gonidelis[m]> Neeraj: these are legacy project ideas. you might wanna look for the most recent projects
Guest53 has joined #ste||ar
<Guest53> Hi I am trying to implement par_unseq execution policy for hpx algorithms
<Guest53> I have gone through the execution flow for hpx::reduce(hpx::execution::seq, v.begin(), v.end(), 0);
<Guest53> We seem to be repeatedly calling functions called tag_fallback_invoke, can someone help me figure out what this function does?
<gonidelis[m]> Guest53: they work as customization points
<Guest53> ok, thank you
<Guest53> any advice on how I can start with coding my implementation of par_unseq
<Guest53> do I need to work with all the customisation points, and appropriate calls or do I need to only implement the required overloads
<gonidelis[m]> have you looked into the implementation of other execution policies?
<Guest53> I have gone through seq for for_each and reduce, they seem to be mostly similar with a couple differences
<Guest53> for par, I had some trouble due to the threads being spawned, but will try to figure it out too
<gonidelis[m]> these are algorithm implementations, not execution-policy implementations
<Guest53> if you meant if I had looked at execution policy implementations in other projects like g++, clang. I haven't
<gonidelis[m]> i would advise you to look into the executors
<gonidelis[m]> no i meant within hpx
<Guest53> ok, will look into executors in HPX.
<gonidelis[m]> the executor
<Guest53> I had watched a couple of talks about executors in HPX and understand what they are. but am not sure how they are internally implemented
<Guest53> any advice on the files/documentation I need to go through to understand their implementation
<gonidelis[m]> sure
<gonidelis[m]> give me a sec
<Guest53> thank you
<gonidelis[m]> i would start with the parallel_executor
<Guest53> just to confirm, right now par_unseq falls back to some other execution policy, right?
<Guest53> like unseq?
<Guest53> previously I was told some algorithms have par_unseq implemented (https://github.com/STEllAR-GROUP/hpx/pull/5889/), so I assumed the appropriate executor was also implemented
<gonidelis[m]> ah you are right
<gonidelis[m]> vectorization requires separate algorithms implementation
<gonidelis[m]> srinivasyadav18: will be able to guide you through that
<gonidelis[m]> fwiw, the policies fall back to the executors not the other way around
<Guest53> ok, thank you. Will try to learn more about executors, policies.
<Guest53> I will try discussing with srinivasyadav18
<gonidelis[m]> thanks
<gonidelis[m]> it's been stale for quite some time
<Guest53> ok, thank you
Guest53 has quit [Ping timeout: 260 seconds]
Guest534 has joined #ste||ar
Guest534 has quit [Ping timeout: 260 seconds]
<Aarya[m]> > <@rtohid:matrix.org> First verify the build with and without dependency on hpxc, here's a minimal example:
<Aarya[m]> How does this code verify if the build is using hpxc?
<Aarya[m]> Also getting the error https://pastebin.com/fdB4fVTc
Guest5384 has joined #ste||ar
Guest5384 has quit [Client Quit]
<Aarya[m]> When running `clang++ -fopenmp -I./hpx/install/include/ -DHPXC test.cpp`
<satacker[m]> A lot of HPX macros depend on CMake to test for features like std::integer_sequence, which is available from C++14 IIRC.
RishabhBali has joined #ste||ar
K-ballo has quit [Ping timeout: 255 seconds]
K-ballo1 has joined #ste||ar
K-ballo1 is now known as K-ballo
<Aarya[m]> So what should I change?
<satacker[m]> Sorry, I haven't played with HPXC, but what I pointed out was intended for you to figure out whether that project's CMakeLists has the same mechanism, or has the C++ standard set to 14 or later.
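(If the project's CMakeLists turned out not to set the standard, the usual fix would be something along these lines; a sketch only, since the actual hpxc CMakeLists has not been checked here.)

```cmake
# Ensure the project compiles as C++14 or later so HPX's feature
# checks (e.g. for std::integer_sequence) can succeed.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
```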
strackar has joined #ste||ar
strackar has quit [Quit: Leaving]
RishabhBali has quit [Ping timeout: 260 seconds]
RishabhBali has joined #ste||ar
RishabhBali has quit [Ping timeout: 260 seconds]
tufei__ has joined #ste||ar
tufei_ has quit [Ping timeout: 255 seconds]
scofield_zliu has joined #ste||ar
RishabhBali has joined #ste||ar
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
RishabhBali has quit [Quit: Client closed]
RishabhBali has joined #ste||ar
RishabhBali has quit [Client Quit]
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
hkaiser has joined #ste||ar
RishabhBali has joined #ste||ar
hkaiser has quit [Quit: Bye!]
Guest20 has joined #ste||ar
Guest20 has quit [Client Quit]
RostamLog has joined #ste||ar
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 255 seconds]
K-ballo1 is now known as K-ballo
rishabhbali has quit [Ping timeout: 260 seconds]
rishabhbali has joined #ste||ar
diehlpk_work has joined #ste||ar
hkaiser has joined #ste||ar
tufei_ has joined #ste||ar
tufei__ has quit [Remote host closed the connection]
rishabhbali has quit [Quit: Client closed]
rishabhbali has joined #ste||ar
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
LouisP has joined #ste||ar
rishabhbali has quit [Quit: Client closed]
tufei_ has quit [Remote host closed the connection]
tufei_ has joined #ste||ar
hkaiser has quit [Quit: Bye!]
fares_atef has joined #ste||ar
fares_atef has quit [Client Quit]
rishabhbali has joined #ste||ar
tufei__ has joined #ste||ar
tufei_ has quit [Remote host closed the connection]
diehlpk_work has quit [Ping timeout: 252 seconds]
diehlpk_work has joined #ste||ar
hkaiser has joined #ste||ar
HHN has joined #ste||ar
rishabhbali has quit [Quit: Client closed]
rishabhbali has joined #ste||ar
HHN has quit [Quit: Client closed]
<rishabhbali> Hello everyone, I am Rishabh Bali, a third-year undergrad at VJTI, Mumbai. I would love to contribute to the STE||AR Group in GSoC 2023. While going through the group's GSoC idea list I found "Study the performance of Halide applications running on HPX threads" very interesting. I have some experience with Halide and have used it to perform basic
<rishabhbali> image processing operations. I have also built HPX from source and am currently exploring some of its examples. Can @rtohid please guide me on how to move forward with this project?
<hkaiser> rishabhbali: welcome, I'm sure rtohid[m] will respond as soon as he sees your question
scofield_zliu has quit [Ping timeout: 268 seconds]
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
K-ballo has quit [Ping timeout: 260 seconds]
K-ballo has joined #ste||ar
LouisP has quit [Quit: LouisP]
fares_atef has joined #ste||ar
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
<fares_atef> hello, while installing hpx,
<fares_atef> in cmake-gui it asks me where to build the binaries. what should i select?
<fares_atef> anyplace i want?
<fares_atef> here in the link he chose a build folder in the source code directory, but i didn't find it.
scofield_zliu has joined #ste||ar
<hkaiser> fares_atef: yes, some new directory anywhere in your file system
<hkaiser> I usually have a build directory under HPX's root
<fares_atef> ok, thanks <3
rishabhbali has quit [Quit: Client closed]
<fares_atef> another question
<fares_atef> while setting the three new variables (BOOST_ROOT, HWLOC_ROOT, CMAKE_INSTALL_PREFIX) via "Add Cache Entry",
<fares_atef> what should i write in the value field?
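(For reference, the equivalent command-line invocation would look roughly like this; all paths below are placeholders that must point at your own Boost and hwloc installations and your desired HPX install location.)

```shell
# Sketch: the same three cache entries passed on the command line.
cmake -DBOOST_ROOT=/opt/boost \
      -DHWLOC_ROOT=/opt/hwloc \
      -DCMAKE_INSTALL_PREFIX=$HOME/hpx/install \
      /path/to/hpx/source
```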
beojan has joined #ste||ar
<beojan> I've noticed that if I use the `--hpx:queuing=shared` option to enable a shared queue across hardware threads, my program crashes when I run it through mpirun with -n >= 2.
<beojan> I originally noticed this with my Gaudi port, but it also happens with my toy demo: https://github.com/beojan/HPXDemo
<beojan> Here's the error:
<beojan> {os-thread}: locality#1/worker-thread#1
<beojan> {thread-description}: <unknown>
<beojan> {state}: not running
<beojan> {auxinfo}:
<beojan> {file}: /home/beojan/Development/src/hpx/src/hpx-1.8.1/libs/core/schedulers/include/hpx/schedulers/thread_queue_mc.hpp
<beojan> {line}: 247
<beojan> {function}: thread_queue_mc::create_thread
<beojan> {what}: staged tasks must have 'pending' as their initial state: HPX(bad_parameter)
diehlpk_work has quit [Remote host closed the connection]
tufei__ has quit [Remote host closed the connection]
tufei__ has joined #ste||ar
youning has joined #ste||ar
youning has quit [Client Quit]
youning has joined #ste||ar
youning has quit [Client Quit]