K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk_work has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
diehlpk_work has quit [Remote host closed the connection]
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
<rachitt_shah[m]>
Hey ms, I've filled in the form for GSoD; please let me know if you would like to interview me or what the next steps for GSoD are.
<rachitt_shah[m]>
Thank you, and looking forward to working with you folks on creating some amazing docs!
<ms[m]>
srinivasyadav227, hkaiser: CI did run, it just didn't show up because there were newer commits after the retest
<hkaiser>
ms[m]: I added srinivasyadav227 to the repo, shouldn't the tests run without a retest now?
<ms[m]>
I've added you to the whitelist now
<ms[m]>
probably not if you added him after the latest commits
<ms[m]>
they should run whenever he commits though
<hkaiser>
ms[m]: is that a jenkins whitelist?
<ms[m]>
yeah
<hkaiser>
ahh, could you add jedi18[m] as well, pls?
<ms[m]>
I added him manually, but you can also comment with "add to whitelist"
<ms[m]>
already done
<hkaiser>
thanks
<ms[m]>
tests should run automatically now, but I wouldn't be surprised if there are kinks, so let me know if they still don't run
<srinivasyadav227>
okay, sure ;-) thanks!
<ms[m]>
rachitt_shah: thanks! we'll let you know soon about next steps
<rachitt_shah[m]>
Thank you ms , looking forward to them!
hkaiser has quit [Quit: bye]
<ms[m]>
hkaiser: sleepless night? :/
K-ballo has joined #ste||ar
<srinivasyadav227>
ms: the tests here https://github.com/STEllAR-GROUP/hpx/pull/5319 got stuck the same as the previous `retest`: all jenkins/cscs checks are running, but the jenkins/lsu tests are not starting
<itn[m]>
does this error "Valid buildid required" mean the same thing?
<srinivasyadav227>
itn: yes, I was referring to "Valid buildid required"
itn[m] is now known as rainmaker6[m]
<rainmaker6[m]>
yeah, it was the same for me. I thought there was some error in the config files that needed to be fixed, so I ignored them.
hkaiser has joined #ste||ar
<rainmaker6[m]>
<srinivasyadav227 "ms: the tests here https://githu"> But it's clear for me now.
<srinivasyadav227>
rainmaker6: okay, did they get automatically cleared?
<rainmaker6[m]>
cleared in what sense? executed?
<srinivasyadav227>
I mean, did you apply any changes or do something to solve it?
<rainmaker6[m]>
no
<srinivasyadav227>
okay :)
<rainmaker6[m]>
basically they sometimes get triggered and sometimes don't.
<srinivasyadav227>
rainmaker6: oh ok alright then, cool, np ;)
joe88 has joined #ste||ar
joe88 has quit [Client Quit]
Girlwithandroid[ has joined #ste||ar
<Girlwithandroid[>
Hi! Has the deadline to apply for GSoD passed?
<Girlwithandroid[>
Can someone help by linking the relevant docs / where to submit the proposal?
<ms[m]>
srinivasyadav227, rainmaker6, the lsu ci is currently "out of order"
<ms[m]>
the guy responsible for the cluster there has been out for some time, but we're hoping to get them back to normal soon
<ms[m]>
"invalid build id" usually means "something unrelated to the actual build and tests went wrong"
<srinivasyadav227>
ms: okay :)
<hkaiser>
ms[m]: Alireza had difficult tooth surgery last week....
<hkaiser>
sorry for the problems
<ms[m]>
hkaiser: no worries! I was just wondering about the status; it takes as long as it takes for him to recover :) please tell him not to stress if I'm making it sound like he should stress about it ;)
<ms[m]>
hkaiser: ok, just had another look; it looks like jobs are running again (for the last few days no lsu jobs were running successfully, so I thought something else was going on)
<ms[m]>
the gcc 8 configurations would need some boost modules installed
<rainmaker6[m]>
ms noted :)
<rainmaker6[m]>
<hkaiser "ms: Alireza has had a difficult "> wishes for an early recovery
<hkaiser>
rainmaker6[m]: thanks
hkaiser has quit [Quit: bye]
nanmiao has joined #ste||ar
<zao>
HPX is too hard if you break your teeth biting into it ;)
<gonidelis[m]>
can't believe that google mismatches us with this stupid tape
<ms[m]>
gonidelis[m]: that was only for tests, right? if yes, put it in e.g. libs/core/iterator_support/tests/include/hpx/iterator_support/test or similar
<gonidelis[m]>
oh ok
<gonidelis[m]>
ms[m]: and then how do I include it in the source?
<ms[m]>
with an interface target that has a target_include_directories, or just a target_include_directories on the test directly (the former is preferable)
<ms[m]>
not sure what to call it though, hpx_iterator_support_test is not great, but something in that direction
<gonidelis[m]>
the thing is that I will go with a similar solution for the `libs/parallelism` tests
<gonidelis[m]>
`libs/parallelism/algorithms`
<gonidelis[m]>
^^
<gonidelis[m]>
that's a different module, so that means a different include
<ms[m]>
hence the target, you can link to the target in the algorithms tests as well
<gonidelis[m]>
but don't we want the modules to be compilable autonomously?
<gonidelis[m]>
if I create a target in iterator_support, the algorithms module, which will be using iter_sent.hpp, will be dependent on the iterator_support one
<ms[m]>
yep, that's ok
<gonidelis[m]>
hm ok
<gonidelis[m]>
are we sure that we wanna add an include directory here?
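(Editor's note: a minimal CMake sketch of the interface-target approach ms describes above. The target name hpx_iterator_support_test is the tentative name from the chat; the test name and exact paths are illustrative placeholders, not taken from the HPX tree.)

```
# INTERFACE library carrying the shared test include directory; a header
# placed under tests/include would then be included as
# <hpx/iterator_support/test/iter_sent.hpp>.
add_library(hpx_iterator_support_test INTERFACE)
target_include_directories(
  hpx_iterator_support_test
  INTERFACE ${PROJECT_SOURCE_DIR}/libs/core/iterator_support/tests/include
)

# A test in another module (e.g. libs/parallelism/algorithms) picks up the
# include directory simply by linking against the target, which also records
# the inter-module dependency discussed above.
target_link_libraries(some_algorithms_test PRIVATE hpx_iterator_support_test)
```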
<hkaiser>
diehlpk_work: the default for the cut_off is ~0x0 (unsigned), i.e. the largest representable unsigned value, so setting any value doesn't make it larger
<diehlpk_work>
Ok, I will try to go back to hpx 1.5
<diehlpk_work>
Dominic and Sagiv used that version on QB
<hkaiser>
diehlpk_work: let me know if this fixes the issue, please
<diehlpk_work>
-DHPX_WITH_MAX_CPU_COUNT=512 \
<diehlpk_work>
hkaiser, that is per node, and it should not affect the network, right?
<hkaiser>
this is per node
<hkaiser>
not sure why you need 512 cores, though
<hkaiser>
shouldn't 256 be sufficient?
hkaiser has quit [Quit: bye]
<diehlpk_work>
We had it before, and Gregor asked to increase the number
<gdaiss[m]>
No, 256 should be sufficient; we would only exceed that if we used one locality per node and all 4 hyperthreads per core (which we don't)
<gdaiss[m]>
I am merely confused by Patrick's description of the crashes, as Octo-Tiger initialization seems to fail when exceeding 256 localities (we use 6 localities per node). No idea why, though
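(Editor's note: for context, a hedged sketch of the configure line the `-DHPX_WITH_MAX_CPU_COUNT=512` fragment above belongs to; everything other than that flag is a placeholder. Per hkaiser, the value is a per-node cap and does not affect the network.)

```
# HPX_WITH_MAX_CPU_COUNT caps the number of processing units HPX manages on
# a single node; 512 would cover e.g. one locality per node using all 4
# hyperthreads per core, per gdaiss above.
cmake -DHPX_WITH_MAX_CPU_COUNT=512 \
      -DCMAKE_BUILD_TYPE=Release \
      /path/to/hpx
```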
<diehlpk_work>
gdaiss[m], It works with 512
<diehlpk_work>
but it hangs during IO
hkaiser has joined #ste||ar
<diehlpk_work>
hkaiser, funny: using 512 lets the code run further on some attempts
<hkaiser>
diehlpk_work: no idea what's wrong
gsodaspirant has joined #ste||ar
<gsodaspirant>
Hello, is the 2021 GSoD position filled?