K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<jaafar>
k-ballo[m]: do HPX ranges have operations somewhere (e.g. removing the first element etc.)? If not are they compatible with boost ranges?
<jaafar>
I ask because your name is in one of the files :)
<jaafar>
maybe that should have been K-ballo
<K-ballo>
both get to me
<K-ballo>
we don't have actual ranges
<K-ballo>
if you bring your own we can use them
<K-ballo>
I don't know about boost ranges, but if they have begin/end things should work
<jaafar>
My goal is to remove the first element of a "shape" iterator range and do something with it
<K-ballo>
the ones from ranges-v3 _should_ be fine too
<jaafar>
The first chunk of the scan algorithms has a special case
<hkaiser>
jaafar: just increment the first element in a range?
<jaafar>
Perhaps I can just call "next" and reestablish the range
<jaafar>
yeah
<K-ballo>
probably turn the argument into iter-sentinel pairs, increment iter, then repack as a range
<jaafar>
OK! that should do it
<jaafar>
thank you
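For reference, what K-ballo describes above could look roughly like the following (a minimal C++20 sketch using std::ranges::subrange as a stand-in for whatever range type the shape actually is; none of this is HPX code):

    #include <cassert>
    #include <iterator>
    #include <ranges>
    #include <vector>

    int main()
    {
        std::vector<int> shape{1, 2, 3, 4};

        // unpack into an iterator/sentinel pair
        auto first = std::ranges::begin(shape);
        auto last  = std::ranges::end(shape);

        int head = *first;   // handle the special-cased first element
        ++first;             // drop it from the range

        // repack the remainder as a range and continue with it
        auto rest = std::ranges::subrange(first, last);
        assert(head == 1 && std::ranges::distance(rest) == 3);
    }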
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
diehlpk_work has quit [Remote host closed the connection]
jehelset has quit [Remote host closed the connection]
bita has quit [Ping timeout: 264 seconds]
nanmiao11 has quit [Ping timeout: 240 seconds]
mihir98 has joined #ste||ar
mihir98 has quit [Quit: Connection closed]
mihir98 has joined #ste||ar
peltonp1_ has quit [Quit: leaving]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
nanmiao11 has joined #ste||ar
K-ballo has quit [Read error: Connection reset by peer]
diehlpk_work has joined #ste||ar
K-ballo has joined #ste||ar
<gonidelis[m]>
K-ballo: is there any pdf format of the std proposals?
<K-ballo>
if the author submitted a pdf
<gnikunj[m]>
gonidelis[m] print the page as pdf?
<gonidelis[m]>
i thought of it
<gonidelis[m]>
gnikunj[m]: thanks ;)
<gonidelis[m]>
i will see how it goes. I don't know if reading the code would be easy without colors
<gnikunj[m]>
yeah, I faced similar issues :/
RostamLog has joined #ste||ar
<K-ballo>
"colors"
<gonidelis[m]>
K-ballo: ??
<K-ballo>
kids these days
<gonidelis[m]>
K-ballo: IDEs' luxury made us spoiled
<gonidelis[m]>
K-ballo: i don't consider my lack of skill at operating punch cards a byproduct of that luxury, though
<jedi18[m]1>
Hi! So I'll finish going through the C++ summer lecture videos by today and I've also started reading a book on C++ templates. At what point will I be ready to tackle those issues that hkaiser mentioned?
<gonidelis[m]>
jedi18: reading is good. but compiling and failing and then fixing your commits is better ;)
<jedi18[m]1>
Haha yeah I guess I would learn a lot better if I start contributing. Is there any urgency to fixing those issues or can I take my time with it?
<gonidelis[m]>
jedi18: take your time
<gonidelis[m]>
being efficient is preferable to being fast
<gonidelis[m]>
~hkaiser~
<zao>
There's a fair bit to learn from just getting through the building of HPX and a toy application using it.
<zao>
Especially when it inevitably breaks on you somewhere, as is tradition with any codebase you download :D
<zao>
(not saying HPX is exceptionally bad)
<jedi18[m]1>
Hmm yeah I haven't even tried using hpx yet, I'll play around with it for a while first then
<zao>
Sometimes it's easier to understand the guts of the code if you know how to use it on the surface and what the overall structure is.
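As an aside, a toy application of the kind zao mentions can be very small. The following is only a sketch; the exact headers and namespaces (hpx/hpx_main.hpp, hpx/algorithm.hpp, hpx::execution::par) vary between HPX versions, so treat them as assumptions:

    // minimal toy application: hpx_main.hpp runs main() as an HPX thread
    #include <hpx/hpx_main.hpp>
    #include <hpx/algorithm.hpp>

    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v(1000, 1);

        // run a parallel algorithm on the HPX runtime
        hpx::for_each(hpx::execution::par, v.begin(), v.end(),
            [](int& x) { x += 1; });

        std::cout << "done\n";
        return 0;
    }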
<jedi18[m]1>
True, but that again brings up the problem of not having learnt parallel programming yet. Would it be wise to start directly with hpx, or would a better approach be to learn openmpi first and then try hpx?
<gonidelis[m]>
jedi18: "if you can segment/distribute your workload directly, do it. if there are variables shared within your 'workers' the treat these variables with care"
<gonidelis[m]>
I am oversimplifying it but now you know how parallel programming works
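As a tiny illustration of the "shared variables need care" part (plain standard C++ threads, nothing HPX-specific; splitting the work into exactly two workers is just for the example):

    #include <atomic>
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main()
    {
        std::vector<int> data(100000, 1);
        std::atomic<long> sum{0};    // shared by all workers -> needs care

        // segment the workload: each worker gets its own half
        auto worker = [&](std::size_t begin, std::size_t end) {
            long local = 0;                   // private to this worker, no synchronization needed
            for (std::size_t i = begin; i < end; ++i)
                local += data[i];
            sum += local;                     // one synchronized update to the shared variable
        };

        std::thread t1(worker, 0, data.size() / 2);
        std::thread t2(worker, data.size() / 2, data.size());
        t1.join();
        t2.join();

        std::cout << sum << '\n';    // prints 100000
    }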
<jedi18[m]1>
So what you're saying is I'm making up too much of a big deal of having to learn parallel programming first and should just start contributing?
<gonidelis[m]>
my point is, you don't have to lose time learning other systems
<gonidelis[m]>
jedi18: yes, in a way. actually i mean, don't be afraid of not knowing ;) you will soon get the concepts
<gonidelis[m]>
writing the code is what's demanding
<gonidelis[m]>
fwiw I got into HPX with a background knowledge of parallel programming and I was eager to use that knowledge. it's been a year and I have spent 1% of my time utilizing parallelism so far. It just happens that I am not working in that corner of the library yet ;) so don't be afraid
<jedi18[m]1>
Ohh ok thanks, that helps :)
<jedi18[m]1>
Let me know whenever you have a suitable issue for me to tackle
<gonidelis[m]>
there is already one open, for the ambitious GSoC students.
<gonidelis[m]>
jedi18: hint: you just change `tag_invoke`s to `tag_fallback_invoke`s. Imitate #5176
<gonidelis[m]>
hint2: ask questions.
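The following is not HPX's actual machinery (HPX has its own tag_invoke implementation and CPO base classes), just a minimal, self-contained C++20 sketch of the idea behind the change: a tag_fallback_invoke overload is only picked when no more specific tag_invoke overload (e.g. a segmented one) exists for the given arguments:

    #include <iostream>
    #include <utility>

    namespace mylib {

        inline constexpr struct frobnicate_t
        {
            // default implementation, used only as a fallback
            template <typename T>
            friend void tag_fallback_invoke(frobnicate_t, T const& x)
            {
                std::cout << "generic fallback: " << x << '\n';
            }

            template <typename T>
            auto operator()(T&& x) const
            {
                // prefer a tag_invoke customization if one is found via ADL
                if constexpr (requires { tag_invoke(frobnicate_t{}, std::forward<T>(x)); })
                    return tag_invoke(frobnicate_t{}, std::forward<T>(x));
                else
                    return tag_fallback_invoke(frobnicate_t{}, std::forward<T>(x));
            }
        } frobnicate{};
    }

    // a "segmented-like" type that customizes the CPO via tag_invoke
    struct special_thing {};
    void tag_invoke(mylib::frobnicate_t, special_thing) { std::cout << "special overload\n"; }

    int main()
    {
        mylib::frobnicate(42);              // falls back to tag_fallback_invoke
        mylib::frobnicate(special_thing{}); // picks the tag_invoke customization
    }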
<jedi18[m]1>
Ohh ok thanks! I guess I'll try this then.
<gonidelis[m]>
you'll need some time. don't give up ;)
<gonidelis[m]>
it's easier than it looks
<jedi18[m]1>
Do I run the tests to see if it's working?
<jedi18[m]1>
<gonidelis[m] "it's easier than it looks"> Ok but yeah sure but give me some time, I don't even know what I'm supposed to do
<gonidelis[m]>
<jedi18[m]1 "Do I run the tests to see if it'"> jedi18 yes
<gonidelis[m]>
<jedi18[m]1 "Ok but yeah sure but give me som"> No rush
<diehlpk_work>
GSoD: Organization applications are now open! The deadline to apply is March 26, 2021 at 18:00 UTC.
<gonidelis[m]>
diehlpk_work: I could easily spam the wiki ;p
<diehlpk_work>
gonidelis[m], Sure, but we need mentors who have time to meet with the student once a week.
<diehlpk_work>
it is not only about having projects; we need two mentors committing time to GSoD
<hkaiser>
diehlpk_work: we'll find mentors
<jedi18[m]1>
"We could probably do a sweep through the already converted algorithms to make them use tag_invoke_fallback in one go." Which already converted algorithms?
<jedi18[m]1>
So I have to figure out which overloads of those algorithms are non-segmented, and for those replace tag_invoke with tag_fallback_invoke?
<hkaiser>
yes
<hkaiser>
everything in the hpx/libs/parallelism/algorithm/parallel/ folder
<hkaiser>
look at for_each as an example
<jedi18[m]1>
Oh ok got it, thanks
<hkaiser>
jedi18[m]1: also, I'd suggest not to try doing all of them at once, rather focus on one of the algorithms at a time and create separate PRs
<jedi18[m]1>
Oh ok well looking at adjacent_find.hpp for example, it only has segmented algorithms so I won't need to make any changes to that right?
bita has joined #ste||ar
<jedi18[m]1>
How will I figure out if it's a segmented algorithm? Like for adjacent_find, the using is_segmented provided a clue but that isn't true everywhere right?
<gonidelis[m]>
the ones that actually have a segmented overload are exposed through an `algorithm_` interface
<gonidelis[m]>
for example `for_each` used to have a `for_each_` dispatch function that guided the compiler to either the parallel algorithms or the segmented algorithm
<gonidelis[m]>
segmented means distributed, btw
<gonidelis[m]>
So what I would do is `git grep 'adjacent_find_('`
<gonidelis[m]>
if i got a result, that probably means that this function is used as a dispatcher for the segmented counterpart
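A hypothetical sketch of the dispatch pattern being described (the trait and type names here are made up for illustration; HPX's real dispatchers look different):

    #include <iostream>
    #include <type_traits>

    // illustrative trait: does the iterator refer to a segmented (distributed) container?
    template <typename Iter>
    struct is_segmented_iterator : std::false_type {};

    struct segmented_iter {};    // stand-in for a segmented iterator type
    template <>
    struct is_segmented_iterator<segmented_iter> : std::true_type {};

    template <typename Iter, typename F>
    void for_each_impl(Iter, Iter, F, std::false_type)    // plain (non-segmented) path
    {
        std::cout << "non-segmented implementation\n";
    }

    template <typename Iter, typename F>
    void for_each_impl(Iter, Iter, F, std::true_type)     // segmented (distributed) path
    {
        std::cout << "segmented implementation\n";
    }

    // the underscore-style dispatcher: routes to an implementation based on the iterator
    template <typename Iter, typename F>
    void for_each_(Iter first, Iter last, F f)
    {
        for_each_impl(first, last, f, is_segmented_iterator<Iter>{});
    }

    int main()
    {
        int* p = nullptr;
        for_each_(p, p, [](int&) {});                                   // non-segmented
        for_each_(segmented_iter{}, segmented_iter{}, [](int&) {});     // segmented
    }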
<jedi18[m]1>
Ohh ok, thanks!
<gonidelis[m]>
jedi18: hint: you can also look into `hpx/libs/full/segmented_algorithms/include/hpx/parallel/segmented_algorithms`
<gonidelis[m]>
you will be mainly looking under /libs (even I haven't got outside of this subdir yet)
<gonidelis[m]>
now `libs/parallelism` is about the parallel part of the library while `libs/full` includes the distributed part also
<gonidelis[m]>
afaik
<jedi18[m]1>
<gonidelis[m] "now `libs/parallelism` is about "> So I compare the two to figure out which are parallel?
<hkaiser>
jedi18[m]1: I don't think you have to worry about the segmented algorithms at this point
<hkaiser>
just ignore them, you shouldn't need to touch them
<jedi18[m]1>
hkaiser:
<jedi18[m]1>
What I meant was how do I figure out which are segmented and which are not?
<jedi18[m]1>
But yeah I think I'll be able to figure it out
<jedi18[m]1>
<hkaiser "everything in the hpx/libs/paral"> I thought I was meant to change the ones in parallel/container_algorithms
<jedi18[m]1>
Wait sorry just a bit confused, give me some time and I'll figure it out
<gonidelis[m]>
`tag_invoke` to `tag_fallback_invoke`
<gonidelis[m]>
+ include the right headers
<gonidelis[m]>
+ one minor nitpick
<gonidelis[m]>
plus^^
<jedi18[m]1>
Ok, but like you said, I can find two `adjacent_find_` in libs/full/segmented_algorithms/include/hpx/parallel/segmented_algorithms/adjacent_find.hpp, so does that mean both overloads in this file are segmented? If so, shouldn't I not be changing them to tag_fallback_invoke?
<jedi18[m]1>
Is that where they are being dispatched from?
<jedi18[m]1>
Ok no no I'm confused again
<jedi18[m]1>
Sorry, the codebase is confusing :/
<jedi18[m]1>
Do I just change all the tag_invoke to tag_fallback_invoke in hpx/libs/parallelism/algorithm/parallel/ folder?
<gonidelis[m]>
<jedi18[m]1 "Ok but like you said I can find "> Bosth overloads lead to segmented implementations. But as hkaiser suggested, you don't have to bother at all as far as your PR is concerned for the /segmented_algos directory
<gonidelis[m]>
if you are trying to understand the machinery: yes. this is where they are dispatched
<gonidelis[m]>
<jedi18[m]1 "Do I just change all the tag_inv"> u got it ;)
<gonidelis[m]>
this alone probably won't compile though. take a closer look at hkaiser's for_each PR, and try to understand what other changes need to be made
<jedi18[m]1>
Wait really? I thought those files contained a mix of segmented and non-segmented overloads and I had to figure out which ones to change by looking at the function signature or something?
<jedi18[m]1>
<gonidelis[m] "this alone probably won't compil"> Oh ok
<gonidelis[m]>
don't get confused with the underscore overloads. these are just dispatchers, not actual implementations
<jedi18[m]1>
Oh ok thanks
<jedi18[m]1>
<gonidelis[m] "this alone probably won't compil"> Seems to build fine after adding tag_fallback_invoke header. Is that all that's required or am I missing something?
<gonidelis[m]>
if it compiles then it's fine
<jedi18[m]1>
I should probably run the tests too?
<gonidelis[m]>
if the corresponding algorithm test compiles then you'd probably be ok