nanmiao11 has quit [Remote host closed the connection]
akheir has quit [Quit: Leaving]
kale[m] has quit [Ping timeout: 240 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
kale[m] has joined #ste||ar
weilewei has quit [Remote host closed the connection]
kale[m] has quit [Ping timeout: 256 seconds]
kale[m] has joined #ste||ar
hkaiser has joined #ste||ar
nikunj97 has joined #ste||ar
nanmiao11 has joined #ste||ar
<ms[m]>
thanks hkaiser for looking into the stencil issues! the split gids one is also nice, because afair that one has shown up in some of our tests as well, so with a bit of luck some of those will be fixed as well
<nikunj97>
hkaiser, I'm working on a single point of entry and further specialization based on predicates (provided/not provided), voting functions (provided/not provided), etc.
<hkaiser>
nikunj97: right, CPOs make that very clean
kale[m] has quit [Read error: Connection reset by peer]
kale[m] has joined #ste||ar
karame_ has joined #ste||ar
diehlpk__ has joined #ste||ar
kale[m] has quit [Ping timeout: 256 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
diehlpk__ has quit [Remote host closed the connection]
diehlpk__ has joined #ste||ar
<nikunj97>
hkaiser, I am unable to understand how tag_invoke will make the implementation easier. I understand how it will make the code cleaner and how CPOs are implemented, but I can't quite understand how the analysis of multiple arguments can be made to work.
<nikunj97>
Is there a basic example that I can look into?
<hkaiser>
nikunj97: as said, the implementation might not get any simpler, it's the interface that is cleaner
<hkaiser>
the copy algorithm I linked to is an example
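For context, a minimal, self-contained sketch of the tag_invoke/CPO pattern being discussed (a generic illustration, not the actual HPX copy implementation; all names below are made up): the CPO is the single callable entry point, and a type opts in by providing a tag_invoke overload that is found via ADL on the tag type.

    #include <iostream>
    #include <utility>

    namespace mylib {

        // the customization point object: one callable entry point
        struct copy_t
        {
            template <typename... Ts>
            auto operator()(Ts&&... ts) const
                -> decltype(tag_invoke(*this, std::forward<Ts>(ts)...))
            {
                return tag_invoke(*this, std::forward<Ts>(ts)...);
            }
        };

        inline constexpr copy_t copy{};
    }

    namespace user {

        struct my_range
        {
            int size;
        };

        // customization found through ADL on the tag type (mylib::copy_t)
        int tag_invoke(mylib::copy_t, my_range const& r)
        {
            std::cout << "copying my_range of size " << r.size << "\n";
            return r.size;
        }
    }

    int main()
    {
        user::my_range r{3};
        mylib::copy(r);    // dispatches to user::tag_invoke
    }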
<nikunj97>
that's what I feel as well
<nikunj97>
anyway. I'll work on it tonight and see if I can make progress.
<hkaiser>
cool
diehlpk__ has quit [Ping timeout: 260 seconds]
<hkaiser>
ms[m]: I'm seeing 'The header hpx/include/iostreams.hpp is deprecated, please include hpx/distributed/iostream.hpp instead'
<hkaiser>
shouldn't this refer to hpx/iostreams.hpp instead?
weilewei has joined #ste||ar
<ms[m]>
hkaiser: we added hpx/distributed/iostream.hpp a few PRs ago and it's distributed simply to leave room for a local hpx/iostream.hpp
<hkaiser>
ok
<ms[m]>
unrelated question, do you have access to the stellarbot github account? I currently set up tokens on my own account for jenkins, but it would be nice to use that account instead for setting jenkins statuses
<hkaiser>
ms[m]: I think Thomas should have access
<ms[m]>
does that make sense to you, or would you prefer hpx/iostream.hpp?
<ms[m]>
ok, thanks, will ask him
<hkaiser>
ms[m]: the question is whether we want to have two separate headers or just one that includes the local version and optionally the distributed version as well
<hkaiser>
I think having one header that includes both the local and possibly the distributed versions might be preferable
<hkaiser>
other users will have to decide which one they need/want
<hkaiser>
otherwise*
<ms[m]>
I'd say the hpx/distributed headers would include local as well, the hpx/ headers would include local only?
<ms[m]>
at least that's what the plan is now with hpx/future.hpp etc.
<hkaiser>
ok, that's the other way around
<hkaiser>
in this case users that want to switch from local to distributed operation will have to change their code and not only the configuration
<hkaiser>
if we (optionally) include the distributed part into the hpx/<header> changing the configuration would do the trick
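A rough sketch of the layout hkaiser is arguing for here (purely hypothetical file and macro names, not the actual HPX headers): the top-level header always provides the local functionality and pulls in the distributed part only when the build is configured for it, so moving from local to distributed becomes a configuration change rather than a code change.

    // hpx/iostream.hpp (hypothetical)
    #include <hpx/local/iostream.hpp>              // local part, always available

    #if defined(HPX_HAVE_DISTRIBUTED_RUNTIME)      // assumed configuration macro
    #include <hpx/distributed/iostream.hpp>        // distributed part, only if enabled
    #endif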
<ms[m]>
hmm, be back in a bit
<ms[m]>
they'd have to change their code anyway going from local to distributed? and when they're distributed they have a hard dependency on distributed hpx?
<hkaiser>
ok
<ms[m]>
also having hpx/x include both means that local-only users won't be able to include local-only features (except if we add hpx/local/x instead)
<hkaiser>
I'd be (slightly) in favor of giving the user one header instead of two in hpx/<header>
<ms[m]>
let's discuss this properly still before the release then, because right now we're doing the opposite of that
<ms[m]>
gtg now, call tomorrow?
<hkaiser>
ms[m]: ok
bita_ has joined #ste||ar
<nikunj97>
hkaiser, I'm giving up for now. I'll add the distributed functionality using a different name for now. Once we have things working, I'll refactor it. Otherwise I'll scratch my head for days.
weilewei has quit [Remote host closed the connection]
<hkaiser>
nikunj97: I can help
<nikunj97>
hkaiser, so here's the thing. There is a single point of entry to the function: template <typename... Params>
<nikunj97>
from here, I decide to get a decayed version using: auto helper = detail::make_async_replay_helper<Params...>::get_decay_ptr(std::forward<Params>(param)...);
<nikunj97>
I'm having trouble specializing make_async_replay_helper
<nikunj97>
the variadic template can be of the form (predicate, function, function arguments) or (function, function arguments)
karame_ has quit [Remote host closed the connection]
<nikunj97>
while specializing, I can't exactly find the partial specialization that will differentiate the two
nanmiao11 has quit [Remote host closed the connection]
diehlpk__ has joined #ste||ar
<nikunj97>
I can't use std::is_invocable coz both predicate and function are invocable
<nikunj97>
function is not an action, so I can't use hpx traits on it either
weilewei has joined #ste||ar
nanmiao11 has joined #ste||ar
<hkaiser>
nikunj97: yah, that's a problem
<hkaiser>
do these functions have the same prototype?
<nikunj97>
sadly yes
<nikunj97>
hkaiser, the return types are the same for predicate and function
<nikunj97>
wait no. The predicate returns a bool. Sadly, a function can also return a bool type.
<nikunj97>
So we can't really differentiate based on the prototypes
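A small, self-contained sketch of the ambiguity described above and one possible way around it (all names here are hypothetical; this is not the actual HPX resiliency code): since the predicate and the wrapped function can have identical prototypes, std::is_invocable cannot tell the two call forms apart, but requiring the predicate to be passed through a distinct wrapper type lets overload resolution do the dispatch.

    #include <iostream>
    #include <type_traits>
    #include <utility>

    // hypothetical wrapper that gives the predicate a distinct type
    template <typename Pred>
    struct with_validator
    {
        Pred pred;
    };

    template <typename Pred>
    with_validator<std::decay_t<Pred>> validator(Pred&& p)
    {
        return {std::forward<Pred>(p)};
    }

    template <typename T>
    struct is_validator : std::false_type {};

    template <typename Pred>
    struct is_validator<with_validator<Pred>> : std::true_type {};

    // chosen when the caller explicitly tags the predicate
    template <typename Pred, typename F, typename... Ts>
    void async_replay(with_validator<Pred> v, F&& f, Ts&&... ts)
    {
        std::cout << "with predicate: "
                  << v.pred(f(std::forward<Ts>(ts)...)) << "\n";
    }

    // chosen for the plain (function, arguments...) form
    template <typename F, typename... Ts,
        typename = std::enable_if_t<!is_validator<std::decay_t<F>>::value>>
    void async_replay(F&& f, Ts&&... ts)
    {
        std::cout << "without predicate: "
                  << f(std::forward<Ts>(ts)...) << "\n";
    }

    int main()
    {
        // both callables return bool, so their prototypes alone cannot
        // tell the predicate apart from the function
        auto pred = [](bool r) { return r; };
        auto f = [](int i) { return i > 0; };

        async_replay(f, 42);                     // (function, args...)
        async_replay(validator(pred), f, 42);    // (predicate, function, args...)
    }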
diehlpk__ has quit [Ping timeout: 260 seconds]
<gonidelis[m]>
hkaiser: Could you explain what `cmake/tests/cxx20_no_unique_address_attribute.cpp` us doing?
<gonidelis[m]>
is ^^
<K-ballo>
it should be checking for C++20's [[no_unique_address]], what is it actually doing?
<gonidelis[m]>
So this `[[no_unique_address]]` thing is checking whether a variable is an empty type and just avoids using memory for that specific variable....
<gonidelis[m]>
As far as I understand.
<K-ballo>
something like that, yes
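For reference, a minimal sketch of what a configure-time feature test for [[no_unique_address]] typically looks like (the actual file in the HPX repository may differ): if the compiler honors the attribute, the empty member can share storage with the int and the struct stays the size of an int.

    struct empty
    {
    };

    struct test
    {
        int i;
        [[no_unique_address]] empty e;
    };

    static_assert(
        sizeof(test) == sizeof(int), "[[no_unique_address]] not supported");

    int main()
    {
        return 0;
    }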
<gonidelis[m]>
Ok and what about this `in_out_result`. I don't quite get it
<K-ballo>
ah, I missed that.. I understood the spec linked into hpx code and that made absolutely no sense :P
<K-ballo>
so yeah, what's the question?
<gonidelis[m]>
What is it doing?
<K-ballo>
nothing
<K-ballo>
it doesn't have any invariants nor responsibilities
<gonidelis[m]>
it's like it takes `I` and `O` types and it returns `in` and `out` ?
<K-ballo>
it's a struct.. a struct with members `in` and `out`.. meant to be used as result type for algorithms
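For reference, a minimal sketch of the shape K-ballo describes, close to std::ranges::in_out_result (the exact HPX definition may differ, e.g. it may add converting constructors): no invariants, just two named members that an algorithm can aggregate-initialize and the caller can unpack with structured bindings.

    template <typename I, typename O>
    struct in_out_result
    {
        [[no_unique_address]] I in;    // where the algorithm stopped reading
        [[no_unique_address]] O out;   // where the algorithm stopped writing
    };

    // e.g. an algorithm returns {last, dest} and the caller writes
    //     auto [in, out] = copy(first, last, dest);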
<gonidelis[m]>
hmm... ok
<gonidelis[m]>
can't really see the usefulness of that, but let's accept it the way it is
<K-ballo>
what are the alternatives?
<gonidelis[m]>
no alternatives, that's true :)
<gonidelis[m]>
it's the standard after all
<K-ballo>
no, I mean, what alternative did the standard have?
<gonidelis[m]>
oh... I don't know. How could I search for them?
<K-ballo>
I mean, what alternative do you think there is to return two things
<gonidelis[m]>
I have no idea. If it's that trivial then why create a separate struct for that....
<K-ballo>
what makes you think it is trivial?
<K-ballo>
how would you design an algorithm that returns two pieces of information?
<gonidelis[m]>
just create a struct that returns two things, name it `return_two_things` and then use it whenever you want to return two things (??)
<K-ballo>
that sounds like std::pair, we already made that mistake in the past
<gonidelis[m]>
The fact that it is doing nothing as you mentioned above makes me think that it is trivial
<K-ballo>
like why not std::pair<I, O> copy(...) { return {in, out}; } ?
<gonidelis[m]>
hmmm.... didn't know that. I saw that hkaiser fixed it on this PR. Why was it a mistake after all?
<K-ballo>
hah, were we actually returning std::pair before? :P
<K-ballo>
because you don't know what's on .first and what's on .second without getting the information from the documentation, and plastering it in the code in comments
<K-ballo>
with .in and .out it's obvious what's what
<K-ballo>
for a brief time ranges had a tagged pair/tuple, which augmented a pair/tuple with tags per element, so one could say std::get<tag::in>(r) instead of std::get<0>(r)
<K-ballo>
some of those were tagged_pairs, some of those were plain pairs, odd
<K-ballo>
tagged_pair is what experimental::ranges had
<hkaiser>
K-ballo: yes, I'm removing those now step by step
<gonidelis[m]>
hkaiser: btw that #4821 PR is great! Bravo!
<hkaiser>
K-ballo: we were using std::pair internally but turned that into a proper tagged_pair on the interface
<K-ballo>
ah ok, that makes sense
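A tiny illustration of the point being made (illustrative only, not HPX code): with std::pair the meaning of each element has to come from the documentation, while the named members of an in_out_result-style struct are self-describing at the call site.

    #include <utility>

    template <typename I, typename O>
    struct in_out_result
    {
        I in;
        O out;
    };

    int main()
    {
        int* it = nullptr;

        std::pair<int*, int*> r1{it, it};
        in_out_result<int*, int*> r2{it, it};

        (void) r1.first;    // input or output iterator? the name doesn't say
        (void) r1.second;
        (void) r2.in;       // self-describing
        (void) r2.out;
    }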
<hkaiser>
gonidelis[m]: would you be interested in doing the same for for_each?
<gonidelis[m]>
for sure...
<hkaiser>
great!
<gonidelis[m]>
conforming to C++20 is the Holy Grail after all...
<gonidelis[m]>
Do you think that it would be a good idea to work on that, in parallel with the algorithm adaptations?
<gonidelis[m]>
hkaiser: ^^
<hkaiser>
gonidelis[m]: I think this _is_ algorithm adaptation ;-)
<hkaiser>
so it's 100% aligned with gsoc, I think - would you agree?
<gonidelis[m]>
was going to say that these two jobs are closely related, but your statement for sure fits better in that case
<gonidelis[m]>
I am working on `transform` right now but I will give for_each a try while I am doing transform
<hkaiser>
gonidelis[m]: no rush - transform is fine - you might want to create the CPOs while you're at it there ;-)
<gonidelis[m]>
hm... yeah that sounds better. Let me try it...
<gonidelis[m]>
Just to be clear: the ultimate goal for each algo is to create all the existing overloads that are defined on the c++20 standard. Right?
<gonidelis[m]>
hkaiser:
<hkaiser>
yes
<gonidelis[m]>
great... let's do that :)))
<gonidelis[m]>
(sth tells me that I am going to live in the world of rabbit holes for quite some time ;p)
<hkaiser>
gonidelis[m]: no rush, rabbit holes are cosy
<gonidelis[m]>
ok so that's like, I use unary/binary_transform_result as is, except when I use it for the algorithm result, in which case it should be transform_result
<hkaiser>
gonidelis[m]: also note that I left the algorithms in hpx::parallel:: untouched, they just got a HPX_DEPRECATED()
<gonidelis[m]>
!!! forget what I said above ^^
<hkaiser>
using ranges::binary_transform_result = util::in1_in2_out_result< I1, I2, O >;
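A rough sketch of how the result aliases mentioned above fit together, following the names used in this discussion (namespace placement and the exact spelling of the actual HPX declarations may differ):

    namespace hpx { namespace parallel { namespace util {

        // generic result carrying the ends of two input ranges and one output range
        template <typename I1, typename I2, typename O>
        struct in1_in2_out_result
        {
            I1 in1;
            I2 in2;
            O out;
        };
    }}}

    namespace hpx { namespace ranges {

        // result type of the binary overload of ranges::transform
        template <typename I1, typename I2, typename O>
        using binary_transform_result =
            hpx::parallel::util::in1_in2_out_result<I1, I2, O>;
    }}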
<gonidelis[m]>
yeah yeah... sorry
<gonidelis[m]>
just saw that
<gonidelis[m]>
`HPX_DEPRECATED() ` ??
<hkaiser>
yah
<hkaiser>
so after your changes we have 3 sets of algorithms
<hkaiser>
hpx::parallel: those are the old ones - leave untouched except for the deprecation warning
<gonidelis[m]>
Can't really find the deprecation thing in your PR....
<gonidelis[m]>
:/
<hkaiser>
hpx:: those should conform to std::, and hpx::ranges:: should conform to std::ranges
<hkaiser>
also note that hpx::ranges:: will expose parallel algorithms (on top of what std::ranges has)
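To summarize the three sets of algorithms just described, a declaration-level sketch using for_each as the example (signatures simplified and illustrative, not the exact HPX declarations; the deprecation macro is shown only as a placeholder):

    namespace hpx { namespace parallel {

        // old interface: left untouched except for a deprecation warning
        template <typename ExPolicy, typename FwdIter, typename F>
        // HPX_DEPRECATED(...)
        void for_each(ExPolicy&& policy, FwdIter first, FwdIter last, F&& f);
    }}

    namespace hpx {

        // conforms to std::for_each, plus the execution-policy overloads
        template <typename InIter, typename F>
        F for_each(InIter first, InIter last, F f);

        template <typename ExPolicy, typename FwdIter, typename F>
        void for_each(ExPolicy&& policy, FwdIter first, FwdIter last, F&& f);
    }

    namespace hpx { namespace ranges {

        // conforms to std::ranges::for_each, plus the parallel overloads
        // that std::ranges does not have
        template <typename Rng, typename F>
        auto for_each(Rng&& rng, F&& f);

        template <typename ExPolicy, typename Rng, typename F>
        auto for_each(ExPolicy&& policy, Rng&& rng, F&& f);
    }}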