<ms[m]>
the rc will hopefully be done this week, but it can also be merged in the next few weeks before the final release
<Nikunj__>
ms[m], crap I forgot. Give me today. I'll make changes for sure.
<ms[m]>
no problem :) thanks!
Nikunj__ is now known as nikunj97
<ms[m]>
hkaiser: hey, I wanted to move the kokkos/hpx repo to STEllAR-GROUP... would you like it to be hpxkokkos (or hpxk... please no) to fit with the other repos, or can I just name it hpx-kokkos? :P
<hkaiser>
as you like it, I don't mind
<hkaiser>
ms[m]: you should have full access to do that
<ms[m]>
thanks, yeah, I think I do have the rights for that (just wanted to check if there was some sort of convention for the names)
<hkaiser>
ms[m]: there are no conventions, afaict
<nikunj97>
hkaiser, btw I was thinking of leaving replicate's invocation localities to the user themselves
<nikunj97>
this way, we only take care of the invocation part and we don't need to explicitly worry about load balancing
<nikunj97>
the user will provide the vector<hpx::id_type> and we'll invoke it on all the provided localities
<hkaiser>
nikunj97: the invocation part can be handled by an executor
<nikunj97>
yes
<hkaiser>
(and should be)
<nikunj97>
but the executor should know the list of localities. That will come from the user (instead of us)
<nikunj97>
that's what I meant.
<hkaiser>
so no std::vector<id_type>, just perhaps a distributed_executor(std::vector<id_type>)
<nikunj97>
yes, right.
<hkaiser>
or it can get it from a distribution policy, we have such an executor
<nikunj97>
yes. But previously I was taking only one locality from the user with replicate and then relying on node affinities to find the neighbors. Now what I'm suggesting is to get all the localities on which the user wants to invoke the functions and use an executor to invoke them.
<hkaiser>
yes
<nikunj97>
so with replication of 3, the user will provide a vector of localities of size 3
<hkaiser>
the user has to provide the policy that decides on the localities to use
<nikunj97>
yes
nickrobison has joined #ste||ar
<nickrobison>
Good morning folks! I've been looking at the template_accumulator example in the HPX repo. Seems pretty straightforward, but I'm not sure how to expand it to support a class with multiple template parameters. Does each of the template type names need to be appended to the name using the underscore delimiter?
kale[m] has joined #ste||ar
<hkaiser>
nickrobison: the name must be unique
<hkaiser>
nickrobison: with C++17 we will be able to generate those names; for now you'll need to take care of this yourself :/
<nickrobison>
Ok. Here's my attempt at expanding the registration macro from the example:
<gonidelis[m]>
The author argues the need for that in two points. The first one suggests that:
<gonidelis[m]>
Suppose that a future version of std::begin requires that its argument model a Range concept. Adding such a constraint would have no effect on code that uses std::begin idiomatically:
<gonidelis[m]>
`using std::begin;
<gonidelis[m]>
begin(a);`
<gonidelis[m]>
If the call to begin dispatches to a user-defined overload, then the constraint on std::begin has been bypassed.
<gonidelis[m]>
Why is the last statement true? Why is the constraint bypassed when calling unqualified names?
<K-ballo>
ADL, are you familiar with it?
<K-ballo>
ADL would look for `begin` overloads in the associated namespaces of `a`
nikunj97 has joined #ste||ar
<gonidelis[m]>
K-ballo: yeah I know what ADL is. wow that makes sense indeed
<gonidelis[m]>
So when we use CPOs the compiler is "forced" to look into the constraints... is that right?
<gonidelis[m]>
Just because we overload the call operator ??
<K-ballo>
somewhat, more like ADL doesn't happen
<K-ballo>
ADL only kicks in for unqualified function calls, a function object is not that
<K-ballo>
begin(a); where begin is a CPO is effectively begin.operator()(a); qualified on begin
weilewei has joined #ste||ar
<gonidelis[m]>
ahhhhhh....... now I see!! So the trick is that they're function objects after all! So the compiler HAS to go through the begin CPO
<gonidelis[m]>
K-ballo: ^^
<K-ballo>
yes, the object "kills" ADL
<gonidelis[m]>
Thank you so much... great explanation!
<gonidelis[m]>
Any ideas what the corresponding traits in the standard are?
<nikunj97>
hkaiser, I talked to parsa about the load balancing stuff and I have a few ideas.
<nikunj97>
I see two methods of getting distributed resiliency done. One, the user decides where the next replay happens, and the one after, and so on; same for replicate. Two, the user provides one locality (similar to async calls) and we use performance counters to identify which nodes are idle and invoke it there.
<nikunj97>
btw thanks for working on CPOs.
<hkaiser>
nikunj: both options can be implemented using executors, so let's build a generic API
<nikunj>
hkaiser: on it :D
<hkaiser>
nikunj: I have an executor version of async_replay working here ;-)
<gonidelis[m]>
K-ballo: There were these issues where we couldn't pass arguments by reference (using `&`)
<gonidelis[m]>
So you suggested that we should remove decay
<gonidelis[m]>
the thing is I don't quite get what transform_iteration does, so I can't tell if we need to remove the decay there
<K-ballo>
that one was decay<>&, both decay by itself and & by itself can make sense
<weilewei>
hkaiser: got some apex profiling results, see emails... though I haven't really looked into what this information means
<gonidelis[m]>
K-ballo: my bad. Thanks ;)
nanmiao11 has joined #ste||ar
<hkaiser>
weilewei: nice
<nanmiao11>
In CircleCI, my branch build fails with the error "/phylanx/src/src/execution_tree/primitives/assert_condition.cpp:10:10: fatal error: 'hpx/distributed/iostream.hpp' file not found"
<hkaiser>
nanmiao11: sorry for that - we've just changed it back to hpx/iostreams.hpp - could you adapt all of Phylanx to use that instead, please?
<hkaiser>
(top of HPX master, that is)
<hkaiser>
I meant hpx/iostream.hpp
<nanmiao11>
Change "hpx/distributed/iostream.hpp" back to "hpx/iostreams.hpp"?