K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
hkaiser has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
jehelset has joined #ste||ar
hkaiser has quit [Quit: bye]
wash[m] has quit [Read error: No route to host]
wash[m] has joined #ste||ar
<ms[m]1>
gonidelis[m]: sorry for not replying earlier... the most important ones are `-DCMAKE_BUILD_TYPE=Release -DHPX_WITH_MALLOC=tcmalloc/mimalloc/jemalloc`; `CXXFLAGS=-march=native` may also make a small difference
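For reference, a configure invocation with those options might look like the sketch below; the source/build paths and the choice of jemalloc are illustrative assumptions, not taken from the chat.

```sh
# Hypothetical out-of-tree configure of HPX in Release mode; pick one of
# tcmalloc/mimalloc/jemalloc for HPX_WITH_MALLOC.
CXXFLAGS=-march=native cmake -S hpx -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DHPX_WITH_MALLOC=jemalloc
cmake --build build --parallel
```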
<gonidelis[m]>
oh should mallocs make a significant difference?
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<ms[m]1>
gonidelis[m]: big difference!
<ms[m]1>
the system allocator is pretty terrible when it comes to multithreading
<gonidelis[m]>
ah ms[m] thanks
<gnikunj[m]>
I've usually found that jemalloc works the fastest, tcmalloc is a close second, and the system allocator is significantly slower than either.
<sestro[m]>
Are there any benchmarks that can give me an indication of the workloads for which the allocators' performance differs significantly?
<hkaiser>
sestro[m]: any application should do, allocators are fundamental
<sestro[m]>
Okay, I am using the system allocator right now since I use HPX in a shared library, hoping that not using jemalloc would not hurt too much.
<hkaiser>
sestro[m]: it depends on the platform; on Linux you might see a significant speedup
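One low-effort way to compare allocators on Linux without rebuilding is to preload them; this is a generic technique rather than something suggested above, `my_hpx_app` is a placeholder for your own benchmark, and the library paths are assumptions that vary by distribution.

```sh
# Run the same HPX application with the default and with preloaded allocators.
./my_hpx_app --hpx:threads=8                                            # system malloc
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 ./my_hpx_app --hpx:threads=8
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 ./my_hpx_app --hpx:threads=8
```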
<srinivasyadav227>
hkaiser: there is a merge conflict on #5235 and it's not letting me make changes, so should I resolve the merge conflict with another commit and push? Also, you said you have some comments regarding #5254; please tell me if any further changes are required for #5235
<hkaiser>
srinivasyadav227: best is to rebase onto master while resolving the conflicts
<hkaiser>
and then force-push to your branch
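A sketch of that workflow, assuming the fork has an `upstream` remote pointing at STEllAR-GROUP/hpx and the PR branch is checked out:

```sh
git fetch upstream
git rebase upstream/master    # resolve conflicts, then: git add <files> && git rebase --continue
git push --force-with-lease origin <your-branch>
```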
<sestro[m]>
hkaiser: At some point I tried using a different one, but that ended in a horrible conflict between different allocators across shared libraries/Python modules. I probably did something stupid and should explore that again.
<srinivasyadav227>
hkaiser: ok, I will do that
Vir has joined #ste||ar
hkaiser has quit [Ping timeout: 252 seconds]
hkaiser has joined #ste||ar
<srinivasyadav227>
hkaiser: Thanks for pushing ;-). I will use rebase from now on; I was not familiar with it and only knew git merge.
<srinivasyadav227>
srinivasyadav227: regarding the GSoC project "Add vectorization to par_unseq", I have a doubt: I should implement all these four, right? (unsequenced_policy, unsequenced_task_policy, parallel_unsequenced_policy, parallel_unsequenced_task_policy)
<hkaiser>
srinivasyadav227: they are all the same, essentially
<diehlpk_work>
I would use the base salary there and derive the pay per hour
<diehlpk_work>
And would multiply this value by the hours the technical writer will work for us
<diehlpk_work>
So using the average from the above webpage yields 29.26 per hour
<diehlpk_work>
Do you have time to participate during the six months of the program (April-November 2021)? Project sizes vary, but range from a commitment of 5-30 hours per week during the program.
<diehlpk_work>
So I would just define how much work per week we want to see
<diehlpk_work>
Using this approach the example amount reflects 14 hours per week
<diehlpk_work>
ms[m]1, let me know what you think.
bita has joined #ste||ar
<srinivasyadav227>
hkaiser: oh, that means I need not implement the par_unseq policy again since we already have it; I should use the existing par_unseq policy and add support for it to the parallel algorithms? Currently no HPX parallel algorithm supports par_unseq.
<hkaiser>
yes
<hkaiser>
srinivasyadav227: that is correct
shubham has joined #ste||ar
<srinivasyadav227>
hkaiser: ok, thanks. I think I need to spend more time on the proposal and work a little less on PRs until April 13; after that I would focus on PRs again. Is that fine?
nanmiao has joined #ste||ar
<jedi18[m]>
@freenode_hkaiser:matrix.org same here, my exams are starting next week, so I probably won't get time to work on any more PRs till they're over (on the bright side, this means no exams during the GSoC period)
<hkaiser>
srinivasyadav227, jedi18[m]: sure, please take your time
<srinivasyadav227>
hkaiser: thank you ;-)
weilewei has joined #ste||ar
shubhu_ has joined #ste||ar
<ms[m]1>
diehlpk_work: yeah, that sounds reasonable
<ms[m]1>
I obviously don't know how much work that project will take but I imagined something like 10-20 hours per week would be reasonable
<ms[m]1>
with 15 hours per week for 12 weeks, that's roughly the 5000 that we have there now
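A rough cross-check of these figures, assuming the 29.26 per hour rate quoted earlier and a 12-week project: 14 hours/week × 12 weeks × 29.26 ≈ 4,916 and 15 hours/week × 12 weeks × 29.26 ≈ 5,267, both close to the 5000 currently budgeted.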
weilewei has quit [Quit: Ping timeout (120 seconds)]