K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
k-ballo[m] has quit [Ping timeout: 240 seconds]
k-ballo[m] has joined #ste||ar
klaus[m] has quit [Ping timeout: 268 seconds]
klaus[m] has joined #ste||ar
hkaiser has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
diehlpk_work has quit [Remote host closed the connection]
bita has quit [Ping timeout: 240 seconds]
jehelset has joined #ste||ar
<srinivasyadav227> gonidelis[m]: yes I am there, I am sorry, I had exams, just finished today, resuming now
<gnikunj[m]> gonidelis[m]: yt?
<gonidelis[m]> gnikunj[m] hey
<gonidelis[m]> freenode_srinivasyadav227[m] hey... I was going to ask you about a pending PR meant for GSoC students, but I think jedi18 has already made one
<srinivasyadav227> gonidelis[m]: I just started 2 hrs back (delayed because of my college internal exams), I think I will make a PR soon today, working on it now 🙂
jehelset has quit [Ping timeout: 260 seconds]
<gnikunj[m]> gonidelis[m] I updated gsod project. Would you like to mentor it with me?
<gnikunj[m]> (The one we were talking about in yesterday's meeting)
<ms[m]> srinivasyadav227: there are plenty of other algorithms that need the tag_invoke to tag_fallback_invoke conversion, you can also have a go at one
<ms[m]> just coordinate with jedi18[m] that you don't do the same ones
<srinivasyadav227> ms: yes, I'm working on copy.hpp now
<ms[m]> 👍️
<srinivasyadav227> :)
<srinivasyadav227> I forked HPX and created a separate branch for the copy algorithm called segmented_copy. Are the tests now running on my CircleCI only partial, or complete? If partial, should I create a PR against my own master? I just want to make sure the tests pass on my CircleCI before creating a PR against HPX master
K-ballo has joined #ste||ar
jehelset has joined #ste||ar
newUser has joined #ste||ar
NewUser84 has joined #ste||ar
newUser has quit [Quit: Connection closed]
<ms[m]> srinivasyadav227: thanks for the PR! I just want to make sure: relying on CI to check that everything is ok is a fair approach, but part of the idea of having you applicants open PRs is that you first try building HPX locally. So if you haven't done that, I highly recommend it (not implying you haven't, it just wasn't clear from your messages whether you have)
hkaiser has joined #ste||ar
<srinivasyadav227> ms[m]: I have built HPX locally on 3 machines (an Nvidia DGX Station, a normal Linux machine, and macOS), then went through the docs, hkaiser's CppCon talks, and the CSCS workshop videos, and worked with a few examples (one week back). I was busy with my exams so I was not active in between; today I forked the repo and created branches and PRs against my master and HPX master. :)
<hkaiser> srinivasyadav227: bravo, you mean it!
<ms[m]> srinivasyadav227: perfect, was just checking :)
<srinivasyadav227> hkaiser: sorry, I didn't understand..
<hkaiser> srinivasyadav227: thanks for the PR
<srinivasyadav227> hkaiser: oh, got it..thanks, 😀
<jedi18[m]1> ms[m]: Just wanted to clarify, by retest you mean I should run the tests which are failing in the CI locally?
<ms[m]> jedi18[m]: no, don't worry, that wasn't for you :D that's for jenkins, another ci system that we use
<ms[m]> same on srinivasyadav227's pr
<jedi18[m]1> Ah right ok
<ms[m]> you two are just not on our whitelist (yet?) to have builds run automatically, so we trigger them manually
<srinivasyadav227> when I created a PR against my own master branch, only one workflow (one platform / set of jobs) was running, but for the PR against HPX master a number of platforms were running. Is this the default? Is there any way I can change it so that tests run against all platforms for my CircleCI account / against my PR?
<hkaiser> you have to create a PR for our test system to kick in
<srinivasyadav227> hkaiser: oh ok, so if another person also creates a PR, do they run in parallel?
<hkaiser> jedi18[m]1, srinivasyadav227, please make sure you really understand the changes you proposed - please ask questions, if necessary
<hkaiser> srinivasyadav227: yes
<srinivasyadav227> hkaiser: ok :)
<jedi18[m]1> Thanks! Yeah, tbh it's bugging me a lot that I don't understand most of the changes in the PR. I'm going through https://www.youtube.com/watch?v=T_bijOA1jts&feature=emb_logo which will hopefully help
<hkaiser> jedi18[m]1: https://www.bfgroup.xyz/duck_invoke/ is a nice introduction to tag_invoke; I'd suggest you study the code. tag_fallback_invoke is an extension of that idea that adds another layer of abstraction, allowing additional compile-time criteria to be used to decide which overload is chosen
<gnikunj[m]> ms: yt?
<srinivasyadav227> jedi18: thanks for sharing the video, helps a lot!
<hkaiser> there is also wg21.link/p1895 for the bone dry factology
<jedi18[m]1> hkaiser: Thanks a lot! I'll read up on those
<ms[m]> gnikunj: type away, I'll be back in a bit
<hkaiser> ms[m]: kokkos call today?
<gnikunj[m]> ms: I was wondering if it's possible to have something like this working: https://gist.github.com/NK-Nikunj/99c8f5598eae5434b45be5c5a73ffe9a taking inspiration from (https://github.com/STEllAR-GROUP/hpx-kokkos/blob/master/src/hpx/kokkos/executors.hpp#L72)
<ms[m]> hkaiser: not planned, but also not not planned
<hkaiser> ok
<ms[m]> we can definitely do one, it'll merge with the hpx meeting anyway
<hkaiser> I might be late for that
<gnikunj[m]> ms: if you have a call? Can I join as well?
<ms[m]> but I'm not off by an hour am I?
<ms[m]> gnikunj: absolutely
<hkaiser> no, it's still early
<ms[m]> same number just 30-ish minutes before the hpx call
<gnikunj[m]> thanks!
<ms[m]> i.e. in about an hour?
<hkaiser> yes
NewUser84 has left #ste||ar [#ste||ar]
hkaiser has quit [Quit: bye]
<ms[m]> gnikunj: I assume that doesn't compile if you're asking? what does the compiler tell you?
<gnikunj[m]> it compiles but it doesn't run as expected. If you make it device only, then it won't compile saying: reference to device function 'foo' in host function
<ms[m]> right, that's ok
<ms[m]> how does it not run as expected?
<gnikunj[m]> I mean not as expected for me :D
<gnikunj[m]> it is the expected CUDA behavior
<gnikunj[m]> how do I pass an invocable that I can later invoke within device?
<ms[m]> either make it a lambda or a struct with a call operator; I think function pointers are tricky exactly because of what you said (not sure if impossible, but at least not the recommended way)
<gnikunj[m]> aah! That should work yes. Let me try that.
<gnikunj[m]> ms: are you having a meeting today too?
<ms[m]> kokkos and hpx, yes
<srinivasyadav227> <gnikunj[m] "it compiles but it doesn't run a"> yes, in CUDA if a function is declared with __device__ only other __device__ functions can call it (and couple of modifications depending on compute capability), hope this is related only :)
diehlpk_work has joined #ste||ar
nanmiao has joined #ste||ar
jehelset has quit [Remote host closed the connection]
K-ballo has quit [Read error: Connection reset by peer]
K-ballo has joined #ste||ar
<ms[m]> did you get the invitation for the meeting with christian?
<gnikunj[m]> srinivasyadav227 yes, I'm aware of it. I wasn't trying to invoke a device function from the host either. It was about forwarding a device function from the host so that I could invoke it within another device function.
<gnikunj[m]> ms I haven't received the invitation. Can I attend the meeting as well?
<ms[m]> gnikunj: sorry, brainfart, will add you
<ms[m]> thanks for telling me
<gnikunj[m]> Thanks!
bita has joined #ste||ar
<ms[m]> gnikunj see pm
hkaiser has joined #ste||ar
diehlpk_work_ has joined #ste||ar
parsa has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
diehlpk_work has quit [Remote host closed the connection]
beauty2 has quit [Write error: Connection reset by peer]
jaafar_ has joined #ste||ar
beauty2 has joined #ste||ar
parsa has joined #ste||ar
jaafar has quit [Remote host closed the connection]
mdiers[m] has quit [Ping timeout: 246 seconds]
pedro_barbosa[m] has quit [Ping timeout: 265 seconds]
tiagofg[m] has quit [Ping timeout: 265 seconds]
k-ballo[m] has quit [Ping timeout: 240 seconds]
gonidelis[m] has quit [Ping timeout: 240 seconds]
ms[m] has quit [Ping timeout: 240 seconds]
heller has quit [Ping timeout: 246 seconds]
jpinto[m] has quit [Ping timeout: 240 seconds]
gnikunj[m] has quit [Ping timeout: 240 seconds]
klaus[m] has quit [Ping timeout: 246 seconds]
tarzeau has quit [Write error: Broken pipe]
tarzeau has joined #ste||ar
tid_the_harveste has quit [Ping timeout: 258 seconds]
jedi18[m]1 has quit [Ping timeout: 258 seconds]
rori has quit [Ping timeout: 258 seconds]
srinivasyadav227 has quit [Ping timeout: 268 seconds]
vroni[m] has quit [Ping timeout: 265 seconds]
pedro_barbosa[m] has joined #ste||ar
gnikunj[m] has joined #ste||ar
k-ballo[m] has joined #ste||ar
gonidelis[m] has joined #ste||ar
tiagofg[m] has joined #ste||ar
mdiers[m] has joined #ste||ar
heller1 has joined #ste||ar
jpinto[m] has joined #ste||ar
ms[m] has joined #ste||ar
klaus[m] has joined #ste||ar
M1ck3y has joined #ste||ar
vroni[m] has joined #ste||ar
rori has joined #ste||ar
jedi18[m]1 has joined #ste||ar
srinivasyadav227 has joined #ste||ar
tid_the_harveste has joined #ste||ar
parsa has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
parsa has joined #ste||ar
<bita> hkaiser, will you join the meeting?
<hkaiser> yep
M1ck3y has quit [Quit: Connection closed]
<hkaiser> I directly modified the vcall example to use LLVM instead of CUDA
<hkaiser> bita: also, in the enoki-python project, you'll need to edit the library python39_d.lib to python39.lib (except if you have the debug Python libraries installed)
<hkaiser> but, as said, the example reports a heap corruption
<bita> Thank you hkaiser
hkaiser has quit [Quit: bye]