hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
K-ballo has quit [Quit: K-ballo]
qiu has quit [Quit: Client closed]
<gonidelis[m]> K-ballo1: yt?
<gonidelis[m]> @anyone, why has the log been down for so long?
<gonidelis[m]> jedi18: yt?
<jedi18[m]> gonidelis yep
<gonidelis[m]> did you come up with a script to determine the iter category of the views?
<jedi18[m]> Nope...script? Weren't we going to determine that by just looking at the code?
<jedi18[m]> Could we automate it and use a script?
<gonidelis[m]> :)
<gonidelis[m]> that's why i summoned the dragon
<jedi18[m]> Ohh, well then let's wait for K-ballo's reply
<jedi18[m]> He's not online right now right?
<gonidelis[m]> no he is not
<gonidelis[m]> i am just used to Greek time
<gonidelis[m]> this is the code i came up with
<gonidelis[m]> He once showed me this AMAZING trick on how to get the iterator category by producing a compile time error
<gonidelis[m]> with an empty type
<jedi18[m]> gonidelis[m]: Yep you've mentioned it before
<gonidelis[m]> you remember!?
<gonidelis[m]> wtf
<gonidelis[m]> that was like in April
<jedi18[m]> Yeah I forgot but I saw it again while reading effective modern c++
<gonidelis[m]> great!
<gonidelis[m]> use that
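(For reference, a minimal sketch of the trick in question: the "incomplete class template" type-displaying technique from Effective Modern C++, Item 4. The class name below is hypothetical.)

```cpp
#include <iterator>
#include <vector>

// Declared but never defined: instantiating it is a compile-time error,
// and the compiler's diagnostic spells out the template argument in full.
template <typename T>
class type_displayer;  // intentionally incomplete

int main()
{
    using iter = std::vector<int>::iterator;
    using category = std::iterator_traits<iter>::iterator_category;
    type_displayer<category> probe;  // the error message reveals the category
}
```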
<jedi18[m]> Btw we have the meeting today at 9:30 IST right?
<gonidelis[m]> it will take like 2 seconds
<gonidelis[m]> yes
<jedi18[m]> Ok
hkaiser has joined #ste||ar
Yorlik has joined #ste||ar
<dkaratza[m]> Where can I find the file that contains the main sections of the docs (like "why hpx?", "manual", "quickstart", etc.)? I'm trying to reorder them, so I need the file that includes them
<hkaiser> dkaratza[m]: is this what you're looking for: https://github.com/STEllAR-GROUP/hpx/blob/master/docs/sphinx/index.rst?
<dkaratza[m]> <hkaiser> "dkaratza: is this what you're..." <- great! thanxx
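(For context, the section ordering in that file lives in a Sphinx toctree; a hedged sketch of what reordering could look like, with entry names approximated rather than taken verbatim from the repository:)

```rst
.. toctree::
   :maxdepth: 2

   quickstart
   why_hpx
   manual
```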
<dkaratza[m]> also, do we have a favicon already? I could create one by cropping the image with our logo, but later on I found that the previous documentation already had a favicon. Is it available anywhere, so that I can use the same one?
K-ballo has joined #ste||ar
<ms[m]> dkaratza: I don't think we have one, but I might be wrong
<ms[m]> where did you find the old one? wherever that is, you can probably get it from there...
<hkaiser> dkaratza[m]: I can give you the original vector image, you could scale that down
<hkaiser> sent
<dkaratza[m]> <hkaiser> "dkaratza: I can give you the..." <- Great, thank you!
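(A hedged sketch of scaling the vector image down to a favicon, assuming ImageMagick is available; logo.svg is a hypothetical filename:)

```sh
# Render the vector logo to a 32x32 favicon; -background none preserves transparency.
convert -background none logo.svg -resize 32x32 favicon.ico
```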
Yorlik has quit [Ping timeout: 246 seconds]
hkaiser_ has joined #ste||ar
hkaiser has quit [Ping timeout: 264 seconds]
hkaiser has joined #ste||ar
hkaiser_ has quit [Ping timeout: 245 seconds]
hkaiser has quit [Ping timeout: 268 seconds]
hkaiser has joined #ste||ar
hkaiser has quit [Ping timeout: 268 seconds]
hkaiser has joined #ste||ar
Yorlik has joined #ste||ar
<gnikunj[m]> hkaiser rori srinivasyadav227 do we have a meeting today? Or do we do one next week when Hartmut is back?
<srinivasyadav227> yes, I think we should have a meeting today 😅. Need to discuss a few things about the paper. :)
<rori[m]> 👍️
<gnikunj[m]> Sounds good. See you in 10 min!
<gonidelis[m]> K-ballo: now here?
<Yorlik> hkaiser: YT?
<hkaiser> Yorlik: a bit...
<Yorlik> hkaiser: Hi! Funny bit: we had to develop some code generation for C++ because of ICEs happening with the templating constructs. Months later, things have started to work; it seems the ICEs are fixed and we can go on using templates as originally intended ... :D
<Yorlik> BTW: Is there any chance to get these Debug mode linking errors fixed any time soon-ish? RelWithDebInfo works, but I'd really like to have a pure Debug build too.
<hkaiser> Yorlik: I'm still travelling, haven't even looked at the issue yet
<Yorlik> hkaiser: IC. Good travelling then!
<hkaiser> thanks
<hkaiser> will be back end of the week
<Yorlik> Nice :). I hope to be around more now. I'm finally back to developing our project after almost a year's interruption due to life changes.
<hkaiser> ok, cool
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
<pedro_barbosa[m]> Hey, I'm writing my thesis on the use of accelerators (GPUs) on local or remote machines with HPXCL and I've developed a heat diffusion example.... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/a0d024f6a42e3c46c20d7796a46e69c37357fcbd)
<hkaiser> pedro_barbosa[m]: have you tried using APEX?
<hkaiser> it's a performance-measurement library that gives insight into HPX (and supports CUDA too)
<pedro_barbosa[m]> I haven't
<pedro_barbosa[m]> I can take a look at it
<hkaiser> ms[m]: can help with this, PatrickDiehl[m]1 as well
<pedro_barbosa[m]> Is this the correct APEX profiler?
<ms[m]> pedro_barbosa: https://uo-oaciss.github.io/apex/
<ms[m]> -DHPX_WITH_APEX=ON will download it for you though
<pedro_barbosa[m]> oh ok this makes a lot more sense
<pedro_barbosa[m]> will it work on HPXCL?
<ms[m]> And you probably want -DAPEX_WITH_OTF2=ON as well (which requires OTF2 to be installed)
<ms[m]> That gives you task traces, but that's less useful when you're doing CUDA
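(Putting those flags together, a hedged sketch of the configure step; the build type and paths are assumptions:)

```sh
# Configure HPX with APEX; APEX itself is downloaded during the build.
# OTF2 must already be installed for the trace output to work.
cmake -DHPX_WITH_APEX=ON \
      -DAPEX_WITH_OTF2=ON \
      -DCMAKE_BUILD_TYPE=RelWithDebInfo \
      /path/to/hpx
```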
<ms[m]> Anything in hpxcl that is using hpx will use apex
<ms[m]> I've never used apex's cuda integration though, so I can't comment on that
<ms[m]> Are your cuda kernels the same in the hpxcl version and the plain cuda version?
<ms[m]> Nvprof is another thing to look at which may be more useful than apex
<ms[m]> Or nvvp, whatever it's called nowadays
<pedro_barbosa[m]> Yes the kernels are the same in both versions
<ms[m]> That's NVIDIA's profiler, which will tell you how long your kernels take, how many data transfers you have, etc.
<pedro_barbosa[m]> I've used nvprof already, but according to it the execution times only differ by a few milliseconds
<pedro_barbosa[m]> Which means the problem is with HPXCL launching the kernels
<ms[m]> Ok, I don't know how hpxcl does what it does, so I don't know what could be causing the differences
<ms[m]> The only thing I can think of to try to find out what's different is to compare the nvprof profiles to see if there's maybe some unnecessary synchronization going on
<ms[m]> Besides that you'll need diehlpk_work for more insight into hpxcl...
<pedro_barbosa[m]> Yeah, I've used nvprof and couldn't really notice any significant difference between the two profiles; I can take another look to see if I can figure out something new
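(A hedged sketch of capturing comparable per-kernel traces for the two versions; the binary names are hypothetical:)

```sh
# Diffing these traces can expose extra synchronization or data
# transfers in the HPXCL run relative to the plain CUDA run.
nvprof --print-gpu-trace ./heat_diffusion_cuda
nvprof --print-gpu-trace ./heat_diffusion_hpxcl
```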
<pedro_barbosa[m]> Is there any reason why gprof wouldn't work on HPXCL? I've tried using it and got some weird results
<pedro_barbosa[m]> Not sure if I did something wrong on that one
<ms[m]> pedro_barbosa: I don't know for sure, but gprof is CPU-only, correct? There's a lot of other stuff going on when you use hpx (the scheduler, mainly) which tends to obscure results when using a profiler that doesn't know about hpx
<pedro_barbosa[m]> Yeah, I believe it is, and that makes sense; it was only measuring 0.17 seconds of the total execution time, which was definitely wrong
<pedro_barbosa[m]> I'm unsure whether valgrind is going to be able to measure anything significant, but it might be worth trying; I'll also take a look at APEX as hkaiser mentioned before
<gnikunj[m]> hkaiser srinivasyadav227 rori they just rolled out a correction. Authors can present virtually too. srinivasyadav227 go for it ;)
<hkaiser> gnikunj[m]: +1
hkaiser_ has joined #ste||ar
hkaiser has quit [Ping timeout: 246 seconds]
Yorlik has quit [Ping timeout: 246 seconds]
hkaiser_ has quit [Quit: Bye!]