hkaiser changed the topic of #ste||ar to: The topic is 'STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/'
hkaiser has quit [Ping timeout: 260 seconds]
Vir has quit [Ping timeout: 264 seconds]
Yorlik has quit [Ping timeout: 240 seconds]
rori has joined #ste||ar
<jbjnr> I have made some changes to the GSoC page here: https://github.com/STEllAR-GROUP/hpx/wiki/HPX-Google-Summer-of-Code-(GSoC)-2020 - however, I've forgotten what we discussed last week. I think we agreed to combine the parallel algorithms tasks into one bigger project?
<simbergm> jbjnr: don't remember exactly either, but I guess it could be one project as long as we make it clear that a student isn't expected to complete all three of the projects we have there now
K-ballo1 has joined #ste||ar
K-ballo has quit [Read error: Connection reset by peer]
K-ballo has joined #ste||ar
Yorlik has joined #ste||ar
K-ballo1 has quit [Ping timeout: 248 seconds]
hkaiser has joined #ste||ar
hkaiser has quit [Ping timeout: 245 seconds]
hkaiser has joined #ste||ar
<jbjnr> Yorlik: FYI, the PR here https://github.com/STEllAR-GROUP/hpx/pull/4306 could be used to solve your once-per-thread/core initialization problem. The test included in the PR shows how to use bound tasks to assign work to a given core, and it does a round-robin test of the functionality. Note that this only works with the shared-priority scheduler - see the init part of the test.
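(For readers of the log: a minimal sketch of the once-per-worker-thread initialization pattern the PR above targets. Only standard HPX calls are used here; the bound-task executor itself is elided, since its exact API lives in the PR's test - plain `hpx::async`, as shown, gives no placement guarantee, which is exactly the problem the PR addresses.)

```cpp
#include <hpx/hpx.hpp>
#include <hpx/hpx_init.hpp>

#include <cstddef>
#include <iostream>
#include <vector>

int hpx_main()
{
    std::size_t const num_threads = hpx::get_os_thread_count();
    std::vector<hpx::future<void>> tasks;
    tasks.reserve(num_threads);

    for (std::size_t core = 0; core != num_threads; ++core)
    {
        // With the bound tasks from PR #4306 one would pass an executor
        // pinned to worker thread `core` here (shared-priority scheduler
        // only); plain hpx::async lets the task land on any worker thread,
        // which the printout below makes visible.
        tasks.push_back(hpx::async([core] {
            std::cout << "init task for core " << core << " ran on worker "
                      << hpx::get_worker_thread_num() << '\n';
        }));
    }

    hpx::wait_all(tasks);
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);
}
```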
<Yorlik> Thanks jbjnr - I'll look into it.
<Yorlik> jbjnr: Looking through the discussion in this PR, this seems like an API redesign task to me, in the sense of Scott Meyers' "Make APIs easy to use and hard to misuse".
<Yorlik> That rule is also the reason why I prefer hard errors over hard-to-debug runtime oddities.
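(A generic C++ illustration of that principle - nothing HPX-specific: giving parameters distinct types turns an easy-to-make mistake into a compile error instead of a runtime oddity.)

```cpp
#include <cstddef>

// Easy to misuse: both parameters are plain integers, so a caller can
// silently swap them and get a hard-to-debug runtime oddity.
void schedule_bad(std::size_t core, std::size_t priority) { /* ... */ }

// Hard to misuse: distinct wrapper types turn a swapped call into a
// compile error instead.
struct core_id  { std::size_t value; };
struct priority { std::size_t value; };

void schedule_good(core_id core, priority prio) { /* ... */ }

int main()
{
    schedule_bad(1, 3);                      // oops, arguments swapped; still compiles
    schedule_good(core_id{3}, priority{1});  // swapping the two types would not compile
}
```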
<K-ballo> I see you've learned all the catchy phrases by now
<Yorlik> Honestly - my decades of experience tell me that when bad things happen, it's usually because simple things were neglected, not because complicated things got out of hand.
<jbjnr> Yorlik: we have to do quite a lot of refactoring anyway to keep up with the latest executors proposal.
<Yorlik> So - yes - I like simple catchy phrases for a reason.
<Yorlik> You mean from the CPP guys?
<K-ballo> who are the CPP guys?
<jbjnr> not us apparently
<Yorlik> The standards committee and the people submitting proposals there
<jbjnr> yes. those ones.
<K-ballo> so like jbjnr and hkaiser and me?
<Yorlik> Or is it an internal hpx proposal?
<jbjnr> no the cpp one
<Yorlik> IC. That's one thing I really like about HPX: staying close to the standard.
<jbjnr> our current API is based on an older revision of the proposal
<Yorlik> IC.
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
<diehlpk_work> The links to the SC 19 workshop proceedings papers
<hkaiser> diehlpk_work: thanks, I have updated the publications sites
hkaiser has quit [Ping timeout: 248 seconds]
hkaiser has joined #ste||ar
zao has quit []
zao has joined #ste||ar
<heller2> jbjnr: i already started on it...
<heller2> But it's quite intense to implement
rori has quit [Quit: WeeChat 1.9.1]
RostamLog has joined #ste||ar
<primef> Oh, I didn't know about that - good you told me! So do I have to take further action, such as refactoring my application or using different APIs, or is it enough to set the MPI_PARCELPORT build flag?
<primef> simbergm
K-ballo has quit [Ping timeout: 265 seconds]
K-ballo has joined #ste||ar
<simbergm> primef: no, application code stays the same
<simbergm> I think if you have the mpi parcelport enabled it'll take precedence over the tcp one
<simbergm> if not you might need to pass `--hpx:ini=hpx.parcel.mpi.enable=1` to the application
<primef> Alright, thank you! I will for sure try that. How do I make sure MPI is used over TCP? Is there some function I can call to check? Otherwise, I will pass `--hpx:ini=hpx.parcel.mpi.enable=1` directly to be sure
<simbergm> primef: don't remember (I'm not a very active user of the parcelports...), if you're still around tomorrow I'll look it up
<simbergm> hkaiser would definitely know
<primef> alright, thank you so much! Yes, I guess I will be here. I put the channel on auto-join, so when my computer is on I'm online :-D. So just mention me and I will be there!
<primef> Another question to which you may have an answer: is it correct that jemalloc is one of the most performant allocators around?
<simbergm> good, hope to see you around here
<simbergm> yeah, jemalloc is pretty good, but so are the others
<simbergm> it performs roughly the same as tcmalloc in most cases
<simbergm> mimalloc is a new one which seems even faster in some limited tests, but in the end you'll have to try them out and see what works best for your application
<simbergm> note that it's most likely not something you need to be testing in the beginning; save it for when you can't think of other ways to speed up your application ;)
<heller2> primef: the MPI parcelport should be automatically chosen once you run your program with mpirun
hkaiser has joined #ste||ar
<heller2> without it, the TCP one is chosen
<simbergm> ah right, thanks heller
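(For the record, one way to check from application code which parcelport is configured is to query the runtime configuration. A sketch using hpx::get_config_entry - the `hpx.parcel.*` keys shown follow the ini setting mentioned above, though the exact set of keys may vary between HPX versions.)

```cpp
#include <hpx/hpx.hpp>
#include <hpx/hpx_init.hpp>

#include <iostream>

int hpx_main()
{
    // get_config_entry(key, default) returns the runtime's value for an
    // ini setting, or the supplied default if the key is not set.
    std::cout << "mpi parcelport enabled: "
              << hpx::get_config_entry("hpx.parcel.mpi.enable", "0") << '\n';
    std::cout << "bootstrap parcelport:   "
              << hpx::get_config_entry("hpx.parcel.bootstrap", "tcp") << '\n';
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);
}
```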
<primef> simbergm: sounds good, thanks! then I'll ignore it for the moment and stick with jemalloc.
<simbergm> primef: yep, that's a good choice
<primef> heller2: So you mean I just set the build flag HPX_WITH_PARCELPORT_MPI and it's enabled?
<heller2> correct
<simbergm> note that if you're on linux you can also set HPX_WITH_MALLOC=system and LD_PRELOAD another allocator when you actually need it
<primef> heller2: perfect, thanks!
<primef> simbergm: I like it. The only disadvantage I see is that it's less transparent to other people which allocator I use. As I develop in a group, that might cause some chaos.
<simbergm> yeah, in that case it's probably better to make the choice upfront
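(A deliberately crude allocation micro-benchmark - generic C++, nothing HPX-specific - of the kind one might run under each allocator, e.g. by LD_PRELOADing them one at a time as suggested above, before committing to a choice. End-to-end application measurements matter more in the end.)

```cpp
#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    constexpr std::size_t iterations = 1000000;

    std::size_t checksum = 0;    // keeps the allocations from being optimized out
    auto const start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i != iterations; ++i)
    {
        // Many small, short-lived allocations: the pattern where differences
        // between jemalloc, tcmalloc, mimalloc, and the system allocator
        // tend to show up.
        std::vector<char> v(64 + i % 256, static_cast<char>(i));
        checksum += static_cast<unsigned char>(v[i % v.size()]);
    }
    auto const elapsed = std::chrono::steady_clock::now() - start;

    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << " ms for " << iterations << " alloc/free pairs (checksum "
              << checksum << ")\n";
}
```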
<heller2> ms: how many responses have you gotten so far?
<simbergm> heller 10 when I last checked earlier today
<heller2> cool, not bad
<simbergm> I can share the results already if you'd like (it's a Google sheet...)
<simbergm> Yeah, quite happy with it
<heller2> yes, please
<hkaiser> yes please do share it
<hkaiser> simbergm: ^^
<primef> hkaiser: hi, regarding #3646, once I implement a fix should I build, commit, push and let the automatic tests do the work? Or should I test them myself? If this is the case, are there any default test cases?
<hkaiser> primef: well you can do both
<hkaiser> PRs are tested by the testing infrastructure, but I'd make sure locally that it does what it should before creating the PR
<hkaiser> but I never run the full test suite locally, only the relevant parts
<hkaiser> primef: also, there is already a PR that implements #3646 for two algorithms, but that one fails the testing - so it might be a good start to figure out what's wrong with it
<hkaiser> primef: (#3829)
primef1 has joined #ste||ar
<hkaiser> primef1: not sure if you saw my responses above
primef has quit [Ping timeout: 246 seconds]