hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC: https://github.com/STEllAR-GROUP/hpx/wiki/Google-Summer-of-Code-%28GSoC%29-2020
K-ballo has joined #ste||ar
nanmiao11 has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
hkaiser has quit [Quit: bye]
kale[m] has joined #ste||ar
nanmiao11 has quit [Remote host closed the connection]
akheir has quit [Quit: Leaving]
kale[m] has quit [Ping timeout: 240 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
kale[m] has joined #ste||ar
weilewei has quit [Remote host closed the connection]
kale[m] has quit [Ping timeout: 256 seconds]
kale[m] has joined #ste||ar
hkaiser has joined #ste||ar
nikunj97 has joined #ste||ar
nanmiao11 has joined #ste||ar
<ms[m]> thanks hkaiser for looking into the stencil issues! the split gids one is also nice, because afair that one has shown up on some of our tests as well, so with a bit of luck some of those will be fixed as well
<hkaiser> ms[m]: yes, I hope so as well!
<hkaiser> I think that one is good to go
<ms[m]> hkaiser: yep, almost
<ms[m]> mind if I push?
<hkaiser> pls do
<hkaiser> arg, release build
<hkaiser> thanks for spotting
<hkaiser> msvc doesn't report this
<nikunj97> hkaiser, started with the resiliency refactor today. Refactoring is more difficult than I expected :/
<nikunj97> enable_if with functions seemed easier (though it's not recommended :D)
<hkaiser> nikunj97: I did some work with customization points lately and I think using those would be superior to what we have today
<nikunj97> hkaiser, links pls
<hkaiser> it wouldn't change much in terms of underlying implementation, but it cleans up the dispatching interface
<hkaiser> wg21.link/p1895, also https://github.com/STEllAR-GROUP/hpx/pull/4821
<nikunj97> hkaiser, I'm working on single point of entry and further specialization based on predicates (provided/not provided), voting function (provided/not provided) etc.
<hkaiser> nikunj97: right, cpos make that very clean
akheir has joined #ste||ar
<hkaiser> I'm considering redoing our async apis (and similar)
diehlpk__ has joined #ste||ar
kale[m] has quit [Ping timeout: 258 seconds]
kale[m] has joined #ste||ar
<ms[m]> jenkins should now build prs merged to master if possible, please ping me if it looks like things are broken
<hkaiser> ms[m]: nice!
<nikunj97> saw this talk: https://www.youtube.com/watch?v=vJ290qlAbbw Ranges in C++20 is really cool
<hkaiser> nikunj97: yes
kale[m] has quit [Ping timeout: 246 seconds]
diehlpk__ has quit [Ping timeout: 260 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Read error: Connection reset by peer]
kale[m] has joined #ste||ar
karame_ has joined #ste||ar
diehlpk__ has joined #ste||ar
kale[m] has quit [Ping timeout: 256 seconds]
kale[m] has joined #ste||ar
kale[m] has quit [Ping timeout: 260 seconds]
diehlpk__ has quit [Remote host closed the connection]
diehlpk__ has joined #ste||ar
<nikunj97> hkaiser, I am unable to understand how tag_invoke will make the implementation easier. I understand how it will make the code cleaner and how CPOs are implemented, but I can't quite understand how multiple-argument analysis can be made to work.
<nikunj97> Is there a basic example that I can look into?
<hkaiser> nikunj97: as said, the implementation might not get any simpler, it's the interface that is cleaner
<hkaiser> the copy algorithm I linked to is an example
<nikunj97> that's what I feel as well
<nikunj97> anyway. I'll work on it tonight and see if I can make progress.
<hkaiser> cool
diehlpk__ has quit [Ping timeout: 260 seconds]
<hkaiser> ms[m]: I'm seeing 'The header hpx/include/iostreams.hpp is deprecated, please include hpx/distributed/iostream.hpp instead'
<hkaiser> shouldn't this refer to hpx/iostreams.hpp instead?
weilewei has joined #ste||ar
<ms[m]> hkaiser: we added hpx/distributed/iostream.hpp a few prs ago and it's distributed simply to leave room for a local hpx/iostream.hpp
<hkaiser> ok
<ms[m]> unrelated question, do you have access to the stellarbot github account? I currently set up tokens on my own account for jenkins, but it would be nice to use that account instead for setting jenkins statuses
<hkaiser> ms[m]: I think Thomas should have access
<ms[m]> makes sense to you or would you prefer hpx/iostream.hpp?
<ms[m]> ok, thanks, will ask him
<hkaiser> ms[m]: the question is whether we want to have two separate headers or just one that includes the local version and optionally the distributed version as well
<hkaiser> I think having one header that includes both, the local and possibly the distributed might be preferable
<hkaiser> otherwise users will have to decide which one they need/want
<ms[m]> I'd say the hpx/distributed headers would include local as well, the hpx/ headers would include local only?
<ms[m]> at least that's what the plan is now with hpx/future.hpp etc.
<hkaiser> ok, that's the other way around
<hkaiser> in this case users that want to switch from local to distributed operation will have to change their code and not only the configuration
<hkaiser> if we (optionally) include the distributed part into the hpx/<header> changing the configuration would do the trick
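The configuration-driven scheme hkaiser describes could look roughly like this. The header layout and the `HPX_HAVE_DISTRIBUTED_RUNTIME` macro name are hypothetical here, used only to illustrate the idea:

```cpp
// hpx/iostream.hpp (hypothetical sketch, not an actual HPX header)

// Always provide the local functionality ...
#include <hpx/local/iostream.hpp>    // hypothetical local-only header

// ... and pull in the distributed part only when the build
// configuration enables it, so switching from local to distributed
// operation needs a configuration change, not a source change.
#if defined(HPX_HAVE_DISTRIBUTED_RUNTIME)    // hypothetical macro
#include <hpx/distributed/iostream.hpp>
#endif
```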
<ms[m]> hmm, be back in a bit
<ms[m]> they'd have to change their code anyway going from local to distributed? and when they're distributed they have a hard dependency on distributed hpx?
<hkaiser> ok
<ms[m]> also having hpx/x include both means that local-only users won't be able to include local-only features (except if we add hpx/local/x instead)
<hkaiser> I'd be (slightly) in favor of giving the user one header instead of two in hpx/<header>
<ms[m]> let's discuss this properly still before the release then, because right now we're doing the opposite of that
<ms[m]> gtg now, call tomorrow?
<hkaiser> ms[m]: ok
bita_ has joined #ste||ar
<nikunj97> hkaiser, I'm giving up for now. I'll add the distributed functionality using a different name for now. Once we have things working, I'll refactor it. Otherwise I'll scratch my head for days.
weilewei has quit [Remote host closed the connection]
<hkaiser> nikunj97: I can help
<nikunj97> hkaiser, so here's the thing. There is a single point of entry to the function: template <typename... Params>
<nikunj97> decltype(auto) async_replay(Params&&... param)
<nikunj97> from here, I decide to get a decayed version using: auto helper = detail::make_async_replay_helper<Params...>::get_decay_ptr(std::forward<Params>(param)...);
<nikunj97> I'm having trouble specializing make_async_replay_helper
<nikunj97> the variadic template can be of the form, (predicate, function, function arguments) or (function, function arguments)
karame_ has quit [Remote host closed the connection]
<nikunj97> while specializing, I can't exactly find the partial specialization that will differentiate the two
nanmiao11 has quit [Remote host closed the connection]
diehlpk__ has joined #ste||ar
<nikunj97> I can't use std::is_invocable coz both predicate and function are invocable
<nikunj97> function is not an action, so I can't use hpx traits on it either
weilewei has joined #ste||ar
nanmiao11 has joined #ste||ar
<hkaiser> nikunj97: yah, that's a problem
<hkaiser> do these functions have the same prototype?
<nikunj97> sadly yes
<nikunj97> hkaiser, the return types are the same for predicate and function
<nikunj97> wait no. The predicate returns a bool. Sadly, a function can also return a bool type.
<nikunj97> So we can't really differentiate based on the prototypes
diehlpk__ has quit [Ping timeout: 260 seconds]
<gonidelis[m]> hkaiser: Could you explain what `cmake/tests/cxx20_no_unique_address_attribute.cpp` is doing?
<K-ballo> it should be checking for c++'s 20 [[no_unique_address]], what is it actually doing?
<gonidelis[m]> So this `[[no_unique_address]]` thing checks whether a variable is of an empty type and just avoids using memory for this specific variable....
<gonidelis[m]> As far as I understand.
<K-ballo> something like that, yes
<gonidelis[m]> Ok and what about this `in_out_result`. I don't quite get it
<gonidelis[m]> hkaiser: implemented it here
<K-ballo> that's a ranges thing, did you see the spec for it?
<gonidelis[m]> yeah... the spec references it on the link I posted above
<gonidelis[m]> but it does not explain much
weilewei has quit [Remote host closed the connection]
<K-ballo> uh?
<K-ballo> the ranges spec, from C++20
<gonidelis[m]> I thought that was the spec
<gonidelis[m]> at least that's where we 're always looking
weilewei has joined #ste||ar
<gonidelis[m]> isn't that what I posted ? ;p
<K-ballo> ah, I missed that.. I understood the spec linked into hpx code and that made absolutely no sense :P
<K-ballo> so yeah, what's the question?
<gonidelis[m]> What is it doing?
<K-ballo> nothing
<K-ballo> it doesn't have any invariants nor responsibilities
<gonidelis[m]> it's like it takes `I` and `O` types and it returns `in` and `out` ?
<K-ballo> it's a struct.. a struct with members `in` and `out`.. meant to be used as result type for algorithms
<gonidelis[m]> hmm... ok
<gonidelis[m]> can't really see the usefulness of that but let's accept it the way it is
<K-ballo> what are the alternatives?
<gonidelis[m]> no alternatives, that's true :)
<gonidelis[m]> it's the standard after all
<K-ballo> no, I mean, what alternative did the standard have?
<gonidelis[m]> oh... I don't know. How could I search for them?
<K-ballo> I mean, what alternative do you think there is to return two things
<gonidelis[m]> I have no idea. If it's that trivial then why create a separate struct for that....
<K-ballo> what makes you think it is trivial?
<K-ballo> how would you design an algorithm that returns two pieces of information?
<gonidelis[m]> just create a struct that returns two things, name it `return_two_things` and then use it whenever you want to return two things (??)
<K-ballo> that sounds like std::pair, we already made that mistake in the past
<gonidelis[m]> The fact that it is doing nothing as you mentioned above makes me think that it is trivial
<K-ballo> like why not std::pair<I, O> copy(...) { return {in, out}; } ?
<gonidelis[m]> hmmm.... didn't know that. I saw that hkaiser fixed it on this PR. Why was it a mistake after all?
<K-ballo> hah, were we actually returning std::pair before? :P
<K-ballo> because you don't know what's on .first and what's on .second without getting the information from the documentation, and plastering it in the code in comments
<K-ballo> with .in and .out it's obvious what's what
<K-ballo> for a brief time ranges had a tagged pair/tuple, which augmented a pair/tuple with tags per element, so one could say std::get<tag::in>(r) instead of std::get<0>(r)
<K-ballo> some of those were tagged_pairs, some of those were plain pairs, odd
<K-ballo> tagged_pair is what experimental::ranges had
<hkaiser> K-ballo: yes, I'm removing those now step by step
<gonidelis[m]> hkaiser: btw that #4821 PR is great! Bravo!
<hkaiser> K-ballo: we were using std::pair internally but turned that into a proper tagged_pair on the interface
<K-ballo> ah ok, that makes sense
<hkaiser> gonidelis[m]: would you be interested in doing the same for for_each?
<gonidelis[m]> for sure...
<hkaiser> great!
<gonidelis[m]> conforming to c++20 is the Holy Grail after all...
<gonidelis[m]> Do you think that it would be a good idea to work on that, in parallel with the algorithm adaptations ?
<gonidelis[m]> hkaiser: ^^
<hkaiser> gonidelis[m]: I think this _is_ algorithm adaptation ;-)
<hkaiser> so it's 100% aligned with gsoc, I think - would you agree?
<gonidelis[m]> was going to say that these two jobs are closely related, but your statement for sure fits better in that case
<gonidelis[m]> I am working on `transform` right now but I will give for_each a try while I am doing transform
<hkaiser> gonidelis[m]: no rush - transform is fine - you might want to create the cpos while you're at it there ;-)
<gonidelis[m]> hm... yeah that sounds better. Let me try it...
<gonidelis[m]> Just to be clear: the ultimate goal for each algo is to create all the existing overloads that are defined on the c++20 standard. Right?
<gonidelis[m]> hkaiser:
<hkaiser> yes
<gonidelis[m]> great... let's do that :)))
<gonidelis[m]> (sth tells me that I am going to live in the world of rabbitholes for quite some time ;p)
<gonidelis[m]> hkaiser: yt?
<hkaiser> gonidelis[m]: here
<gonidelis[m]> according to this
<gonidelis[m]> http://eel.is/c++draft/algorithms#alg.transform
<gonidelis[m]> Should I keep the implementations that already exist and then add the ranges-implementations?
<hkaiser> gonidelis[m]: yes
<hkaiser> for copy I reduced the hpx::copy to require equality of the iterators, but for hpx::ranges::copy this is not a restriction
<gonidelis[m]> But don't the ranges implementations already provide the interface for the initial ones?
<hkaiser> yes
<gonidelis[m]> ohh... ok so I still adapt the initial ones in order not to require equal iterators
<hkaiser> I made it so that the hpx::copy algorithms require iterbegin == iterend
<gonidelis[m]> grr... sorry :/
<hkaiser> the hpx::ranges::copy do not require that
<gonidelis[m]> ok... so it's like I don't touch anything at all on the hpx::transform
<hkaiser> also you might want to replace the tagged_tuple with unary_transform_result or binary_transform_result
<hkaiser> gonidelis[m]: except for the return values
<hkaiser> make the hpx::<algorithm> 100% the same as std::<algorithm> and hpx::ranges the same as std::ranges
<gonidelis[m]> Do unary/binary_transform_result exist?
<hkaiser> no, those need to be added
<hkaiser> see how it's done for copy
<gonidelis[m]> ok great... I am on it ;) (that should take a little more than I expected)
<hkaiser> gonidelis[m]: no rush, rabbitholes are cosy
<gonidelis[m]> ok so that's like, I use unary/binary_transform_result as is, except when I use it for the algorithm result, in which case it should be transform_result
<hkaiser> gonidelis[m]: also note that I left the algorithms in hpx::parallel:: untouched, they just got a HPX_DEPRECATED()
<gonidelis[m]> !!! forget what I said above ^^
<hkaiser> using ranges::binary_transform_result = util::in1_in2_out_result< I1, I2, O >;
<gonidelis[m]> yeah yeah... sorry
<gonidelis[m]> just saw that
<gonidelis[m]> `HPX_DEPRECATED() ` ??
<hkaiser> yah
<hkaiser> so after your changes we have 3 sets of algorithms
<hkaiser> hpx::parallel: those are the old ones - leave untouched except for the deprecation warning
<gonidelis[m]> Can't really find deprecation thing on your pr....
<gonidelis[m]> :/
<hkaiser> hpx:: those should conform to std::, and hpx::ranges:: should conform to std::ranges
<hkaiser> also note that hpx::ranges:: will expose parallel algorithms (on top of what std::ranges has)
<gonidelis[m]> ok yeah :)
<gonidelis[m]> ahh it was because github didn't load the long diff... sorry
<gonidelis[m]> ok great! :)))) I think I am 100% on that...
<hkaiser> gonidelis[m]: it's a lot of editing work, sorry for that - but in the end it will make things much more conforming
<gonidelis[m]> can't wait to see the result
<gonidelis[m]> i love it when it gets more complex so please don't apologise
<gonidelis[m]> minor detail: I will fix clang-format on CONCEPT_REQUIRES along with the deprecation warnings ;)
nikunj97 has quit [Quit: Leaving]
<hkaiser> gonidelis[m]: thanks!
<hkaiser> \cond/\endcond make things in-between disappear for DOXYGEN
<gonidelis[m]> but i don't see them on the new copy
<gonidelis[m]> hkaiser: ^^
<hkaiser> gonidelis[m]: we don't need them there as I wrapped all doxygen comments into a #ifdef DOXYGEN/#endif instead
<gonidelis[m]> ok, I will change that on transform accordingly
<hkaiser> thanks
<gonidelis[m]> hkaiser: why was that needed?
<hkaiser> gonidelis[m]: because the std algorithm returns only one value, the ranges algorithm returns two
<gonidelis[m]> hkaiser: but that's under hpx::parallel , aren't the std:: algorithms go under hpx:: ?
<hkaiser> uhh, did I change it there as well?
<hkaiser> I'll fix it
<hkaiser> gonidelis[m]: thanks for pointing this out
<gonidelis[m]> np ;)
nikunj97 has joined #ste||ar
<hkaiser> gonidelis[m]: see #4837, should be fine now
<gonidelis[m]> ok I think that's better but before we move on merging
<gonidelis[m]> Could you explain why this change was made?
<gonidelis[m]> again under hpx::parallel
<gonidelis[m]> We are letting the begin and end iterators have different types
<gonidelis[m]> I know that this is not much of a problem as these versions will be considered obsolete by the end of this run but still...
<gonidelis[m]> hkaiser: ^^
<hkaiser> did I screw up another spot?
<gonidelis[m]> let's not rush... I could be missing sth after all
<hkaiser> gonidelis[m]: what's wrong there?
<gonidelis[m]> `InIter first, InIter last` became `InIter first, Sent last`
<gonidelis[m]> under hpx::parallel namespace
<hkaiser> can't see that, which file is it?
<hkaiser> gonidelis[m]: well, this is fine, I think
<hkaiser> the implementation of copy was adapted to sentinels along the way
<hkaiser> it's used by all algorithms
<hkaiser> the old and the new ones
<gonidelis[m]> aahhhh.....
<gonidelis[m]> my mistake then
<gonidelis[m]> so the ::detail namespace could be under hpx::parallel
<gonidelis[m]> but it could be used by the hpx:: algos too.
<gonidelis[m]> is that right?
nanmiao11 has quit [Remote host closed the connection]
nanmiao11 has joined #ste||ar
<hkaiser> gonidelis[m]: I left it there, is all
<hkaiser> the implementation could go wherever
<gonidelis[m]> ok ok... great. I am still familiarizing with the namespacing thing ;) #4837 seems good then :)
<gonidelis[m]> did it pass the unit tests?
weilewei has quit [Remote host closed the connection]
weilewei has joined #ste||ar
weilewei has quit [Remote host closed the connection]