K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
diehlpk_work_ has joined #ste||ar
diehlpk_work has quit [Ping timeout: 245 seconds]
<NewNickname[m]> is it just me, or is something going on with the internet worldwide?
<rachitt_shah[m]> GitHub and Stack Overflow are down due to a CDN outage
<NewNickname[m]> github is working for me
nanmiao has quit [Quit: Connection closed]
hkaiser has joined #ste||ar
<hkaiser> NewNickname[m]: yt?
<NewNickname[m]> yes
<hkaiser> your nick is strange, btw
<hkaiser> anyways... have you seen the PR for copy?
<NewNickname[m]> I probably messed up something
<NewNickname[m]> ahh no
<NewNickname[m]> oh! thanks!
<NewNickname[m]> let me put it in there
<mdiers[m]> hkaiser: ms Many thanks for #5117 !
<hkaiser> mdiers[m]: most welcome. I'm glad we caught this
<ms[m]> mdiers: yep, all the thanks goes to hkaiser!
<hkaiser> *blush* I just accidentally stumbled across the issue in a different context....
<hkaiser> NewNickname[m]: the copy PR was actually messed up (not sure how this happened, half of the changes were missing), but now it should be fine (tm)
<NewNickname[m]> hkaiser: cool, didn't manage to check it yet either way
<NewNickname[m]> i will have an update in probably 30 minutes
<hkaiser> NewNickname[m]: ok, cool
<hkaiser> ms[m]: btw, I'm still not sure I understand the lifetime management in #5374 :/
<ms[m]> hkaiser: :/ what's the confusing part?
<hkaiser> what is keeping the operation_state alive long enough for the future's continuation to run safely?
<hkaiser> ms[m]: ^^
<ms[m]> hkaiser: did you have a look at 2.2.5 here http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0443r14.html?
<hkaiser> yah sure
<ms[m]> specifically: "Once execution::start has been invoked, the caller shall ensure that the start of a non-exceptional invocation of one of the receiver’s completion-signalling operations strongly happens before [intro.multithread] the call to the operation_state destructor."
<hkaiser> but the future is 100% independent of the operation_state
<ms[m]> but the future is stored in the operation state, which implicitly gives it a reference count of at least 1, right?
<hkaiser> nod
<ms[m]> so it's not 100% independent...
<hkaiser> ms[m]: but does that ensure that the future becomes ready before the operation_state goes out of scope? or even that it manages to run the continuation before that?
<ms[m]> yes
<hkaiser> the continuation might be scheduled only, not run when the future becomes ready
<hkaiser> it might run later only
<ms[m]> the operation state cannot go out of scope before set_value is called on the receiver, which does not happen until the continuation runs
<ms[m]> that's ok
<hkaiser> ok
<ms[m]> if the continuation runs later, then set_value runs later as well, and then the operation state is also released later
<hkaiser> that's the part I missed
<ms[m]> ok, it's admittedly non-obvious, but since the current proposal explicitly specifies that this is allowed I'd like to keep this behaviour
<ms[m]> however, if you think I can clarify this better in the comments please let me know
<hkaiser> ok
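For context, here is a minimal, self-contained sketch of the lifetime guarantee discussed above. This is not HPX's actual sender/receiver code and all names are illustrative; the point is that the continuation is what calls set_value on the receiver, so the operation_state must stay alive until that completion signal has started (the P0443 2.2.5 requirement quoted earlier):

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <thread>
#include <utility>

// Toy receiver: set_value is the completion-signalling operation.
struct receiver
{
    std::function<void(int)> on_value;
    void set_value(int v) { on_value(v); }
};

// Toy "future continuation": runs the callback on another thread,
// possibly well after the value itself is ready.
template <typename F>
void async_then(int value, F f)
{
    std::thread([value, f = std::move(f)]() mutable { f(value); }).detach();
}

// The operation state owns the receiver. The continuation captures `this`,
// which is only safe because the caller must keep the operation state alive
// until set_value has started (strongly-happens-before the destructor).
struct operation_state
{
    receiver r;
    void start()
    {
        async_then(42, [this](int v) { r.set_value(v); });
    }
};

int main()
{
    std::promise<void> done;
    operation_state os{receiver{[&](int v) {
        assert(v == 42);
        done.set_value();  // the completion signal has started
    }}};
    os.start();
    // Do not let `os` be destroyed before set_value runs: block here.
    done.get_future().wait();
}
```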
nanmiao has joined #ste||ar
NewNickname[m] is now known as gonidelis2[m]
gonidelis has joined #ste||ar
gonidelis is now known as gonidelis_freeno
gonidelis2[m] is now known as gonidelis[m]
<gonidelis[m]> apologies for the spam
gonidelis_freeno has quit [Client Quit]
<pedro_barbosa[m]> is there a way to launch kernels in HPXCL without passing arguments and instead use the data in the GPU memory?
diehlpk_work_ is now known as diehl_work
diehl_work is now known as diehlpk_work
diehlpk_work has quit [Changing host]
diehlpk_work has joined #ste||ar
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
<gonidelis[m]> hkaiser: .....
<gonidelis[m]> equally fast, if not better
<gonidelis[m]> what are you cooking there?
<gonidelis[m]> amazing
<gonidelis[m]> I suspect that on more than 8 cores (on ROSTAM) our copy might get even better
<gonidelis[m]> because it scales better
<hkaiser> gonidelis[m]: good
<gonidelis[m]> hkaiser: it's not just good. it's great!
<gonidelis[m]> thanks!!!
<hkaiser> gonidelis[m]: should be on par with memcpy
<gonidelis[m]> hkaiser: you mean that we could/should apply the same optimizations to memcpy?
<hkaiser> no
<hkaiser> I meant that hpx::copy should give you the same speed as a plain memcpy for those types
<gonidelis[m]> oh ok
<gonidelis[m]> got it
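For reference, a minimal sketch of the kind of comparison being discussed, assuming an HPX build; the buffer size and timing setup are illustrative, not the actual benchmark used above. For trivially copyable element types, hpx::copy should reach plain memcpy speed:

```cpp
#include <hpx/algorithm.hpp>
#include <hpx/hpx_main.hpp>

#include <chrono>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <vector>

int main()
{
    std::size_t const n = std::size_t(1) << 26;  // illustrative size
    std::vector<int> src(n, 1), dst(n);

    auto t0 = std::chrono::steady_clock::now();
    std::memcpy(dst.data(), src.data(), n * sizeof(int));
    auto t1 = std::chrono::steady_clock::now();

    // Parallel hpx::copy over the same trivially copyable range.
    hpx::copy(hpx::execution::par, src.begin(), src.end(), dst.begin());
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "memcpy:    "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "hpx::copy: "
              << std::chrono::duration<double>(t2 - t1).count() << " s\n";
    return 0;
}
```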
<mdiers[m]> I have a special question: I am loading components via shared libraries at runtime. Is there a way to update the list of registered components (component server) after hpx::init()?
<mdiers[m]> What do I have to do to make a component available when HPX_REGISTER_COMPONENT() is executed after hpx::init()?
<hkaiser> mdiers[m]: interesting problem
<mdiers[m]> hkaiser: Hmm, I wouldn't have expected that answer 🤣
<hkaiser> sec
<mdiers[m]> hkaiser: Sounds like an idea
<mdiers[m]> Unfortunately I'm a bit lost here, being just an end user; I'd need a hint.
<hkaiser> mdiers[m]: I think this is currently not supported and would need some work
<hkaiser> I assume it doesn't work currently, otherwise you wouldn't ask
<hkaiser> mdiers[m]: the components are being registered here, currently (during startup), so this would need some additions/changes to support dynamic loading
<hkaiser> well, the link above has the code that discovers existing components and fills the internal registry with information about them
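For context, a minimal sketch of the normal, startup-time registration path that mdiers[m] wants to trigger after hpx::init(); the component and names here are illustrative boilerplate, not a solution to the dynamic-loading problem:

```cpp
#include <hpx/hpx.hpp>
#include <hpx/include/components.hpp>

// A trivial component server (illustrative name).
struct hello_server : hpx::components::component_base<hello_server>
{
    int answer() { return 42; }
    HPX_DEFINE_COMPONENT_ACTION(hello_server, answer, answer_action)
};

// These macros expand to static registration objects whose constructors run
// when the shared library is loaded; hpx::init() later scans the resulting
// registry. A library loaded after hpx::init() still runs them, but nothing
// re-scans the registry, which is the missing piece being discussed.
using hello_component = hpx::components::component<hello_server>;
HPX_REGISTER_COMPONENT(hello_component, hello_server_component)
HPX_REGISTER_ACTION(hello_server::answer_action)
```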
<gonidelis[m]> why do I get this `logging_destination` again?
<mdiers[m]> <hkaiser "mdiers: I think this is currentl"> It works as long as the shared libs are loaded before the hpx::init(). It is only important that all are kept. But as soon as the first one is removed again, hpx::init() gets some undefined state, because the dependencies are somehow messed up.
<gonidelis[m]> I am using `-DHPX_WITH_LOGGING=OFF`
<hkaiser> mdiers[m]: could you create a small example demonstrating the issue?
<hkaiser> gonidelis[m]: uhh, it's fixed on the other PR only, just merge the two
<hkaiser> the other PR has not been merged to master yet
<gonidelis[m]> oh ok
<hkaiser> ms[m]: thanks for doing the RC1! much appreciated!
<mdiers[m]> <hkaiser "mdiers: could you create a small"> I can create. But it will be a bit bigger because of all the dependencies with libs. I hope then in that case also the boost::dll is compatible to windows for you.
<hkaiser> should be
<hkaiser> but pls keep it as small as possible, just the barebones
<hkaiser> you can also use https://github.com/STEllAR-GROUP/hpx/tree/master/libs/core/plugin for the dynamic loading (we do)
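And a hedged sketch of the runtime loading itself, using boost::dll as mdiers[m] mentioned (the hpx/plugin library linked above serves the same purpose); the library path is hypothetical and the behaviour described in the comments reflects the discussion, not verified HPX internals:

```cpp
#include <boost/dll/shared_library.hpp>

#include <iostream>

int main()
{
    // Hypothetical component library; its static initializers (including
    // any HPX_REGISTER_COMPONENT expansions) run at load time.
    boost::dll::shared_library lib("libhello_component.so",
        boost::dll::load_mode::rtld_global);

    std::cout << "loaded: " << std::boolalpha << lib.is_loaded() << '\n';

    // Keeping `lib` alive for the whole run matters: as described above,
    // unloading a component library again leaves HPX in an undefined state.
    return 0;
}
```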
<ms[m]> hkaiser: 👍️
parsa has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
parsa has joined #ste||ar
<hkaiser> ms[m]: we have received the first part of the GSoD money
parsa has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
parsa has joined #ste||ar
nanmiao has quit [Ping timeout: 240 seconds]
<gonidelis[m]> hkaiser: can you please pm me?
<ms[m]> hkaiser: \o/ I'll have a look in the morning
nanmiao has joined #ste||ar
<ms[m]> I think I have an email with some instructions on what to do now
jaafar has quit [Quit: Konversation terminated!]
<hkaiser> ms[m]: cool
<diehlpk_work> hkaiser, ms[m] How is GSoD going?
<diehlpk_work> gonidelis[m], gnikunj[m] How is GSoC going?
<pedro_barbosa[m]> diehlpk_work: can you check PM pls
<diehlpk_work> pedro_barbosa[m], I do not see one
<pedro_barbosa[m]> what about now?
<diehlpk_work> pedro_barbosa[m], no
<pedro_barbosa[m]> can you try to send me one? idk what's going on, but for some reason I often can't send PMs
<gonidelis[m]> diehlpk_work: akhil is doing great
<gonidelis[m]> he is very productive
<hkaiser> diehlpk_work: srinivas has shown very impressive results today, so GSoC is going incredibly well
parsa has quit [*.net *.split]
tiagofg[m] has quit [*.net *.split]
diehlpk_work has quit [*.net *.split]
hkaiser has quit [*.net *.split]
sivoais has quit [*.net *.split]
Vir has quit [*.net *.split]
parsa has joined #ste||ar
diehlpk_work has joined #ste||ar
hkaiser has joined #ste||ar
tiagofg[m] has joined #ste||ar
sivoais has joined #ste||ar
Vir has joined #ste||ar
hkaiser has quit [Ping timeout: 258 seconds]
hkaiser has joined #ste||ar