hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<heller>
Why wasn't the failure caught by pycicle/circle?
<simbergm>
heller: it was... also by circleci
<simbergm>
heller: did you notice that thread local storage doesn't seem to work correctly? do you know when it (thread local storage or the test for it) last worked correctly?
<jbjnr>
heller: scroll up a bit. the error was caught by CI, but I could not reproduce it because I didn't see that there was an extra commit on the branch. I tested on daint with my copy of the PR and concluded it was just HPX giving random errors again.
<jbjnr>
my mistake. won't happen again
<K-ballo>
no more HPX giving random errors again?
<hkaiser>
lol
eschnett has joined #ste||ar
<heller>
simbergm: it never worked reliably. That's why we turned the feature off by default
<heller>
The phylanx guys need that feature. I told hkaiser that the tss test is likely to fail on us
<heller>
Also commented on the pr that enabled it on
<heller>
Circle CI so people should know about it
<heller>
The comment was "let's fix it when it turns up"
<zao>
I love our thread specific pointer. Inspecting it in GDB on most of the BSDs crashes GDB.
<zao>
Very helpful when printing a topology...
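For context on the feature heller describes: the thread-local storage support is a build-time switch in HPX's CMake configuration, off by default per the discussion above. The sketch below shows how it might be enabled when building for Phylanx; the option name HPX_WITH_THREAD_LOCAL_STORAGE and the source path are assumptions for illustration, not confirmed in this log.
    # Hypothetical configure step enabling HPX's thread-local storage support
    # (off by default, but needed by Phylanx according to the discussion above).
    # Option name and path are assumptions, shown only for illustration.
    cmake /path/to/hpx-source \
        -DCMAKE_BUILD_TYPE=Release \
        -DHPX_WITH_THREAD_LOCAL_STORAGE=ON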
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
stevenrbrandt has joined #ste||ar
<stevenrbrandt>
I'm on an 80 core machine
<stevenrbrandt>
I compiled HPX with HPX_MORE_THAN_64_THREADS=On
<stevenrbrandt>
and I still get RuntimeError: partitioner::add_resource: Creation of 2 threads requested by the resource partitioner, but only 1 provided on the command-line.
<stevenrbrandt>
what am I missing?
<stevenrbrandt>
Sorry, I get that error message from Phylanx
<stevenrbrandt>
python3 -c 'import phylanx'
<simbergm>
stevenrbrandt: you need to set HPX_WITH_MAX_CPU_COUNT=80 or more (the CPU count includes hyperthreads)
<zao>
stevenrbrandt: The flags you should use are the ones with _WITH_ in the name; you might've set some result flag that bypasses some checks.
<zao>
Bah, simbergm beat me to it... I blame train wifi :)
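To make the advice from simbergm and zao concrete, a minimal sketch of a configure line for an 80-core machine follows. The value 160 assumes two hardware threads per core; the source path and build type are placeholders.
    # Sketch of an HPX configure step for an 80-core node, per the advice above.
    # HPX_WITH_MAX_CPU_COUNT must cover all hardware threads (hyperthreads included);
    # counts above 64 also need the more-than-64-threads support to be enabled.
    cmake /path/to/hpx-source \
        -DCMAKE_BUILD_TYPE=Release \
        -DHPX_WITH_MAX_CPU_COUNT=160 \
        -DHPX_WITH_MORE_THAN_64_THREADS=ON
At run time, a standalone HPX application would then request its worker threads explicitly, e.g. with --hpx:threads=80; how Phylanx passes this through its Python entry point is not covered in this log.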
aserio has quit [Ping timeout: 250 seconds]
aserio has joined #ste||ar
<aserio>
heller: yt?
<heller>
aserio: hey
<aserio>
I was wondering if you were following the IJHPC paper cleanup
<heller>
I got sick right on Tuesday, was essentially confined to my bed
<heller>
I am still at home
<aserio>
aww, that's no good
<heller>
no :(
<aserio>
I hope you are feeling a bit better
<heller>
yes
<heller>
I am in front of my computer right now ;)
<aserio>
heller: well once you are up to it, hkaiser and I would really appreciate it if you would help get that paper out of the door
<heller>
yes, that's on my high priority list
<heller>
i want to get it out as well
<zao>
Sorry for not getting the BSD stuff done by the way, ran into some weirdness where no threads were assigned properly, topology is a mess :)
<zao>
I guess I should push the stuff I have to my repo.
<heller>
aserio: i'll try to work on it tomorrow
<aserio>
heller: Thanks!
stevenrbrandt has quit [Quit: Page closed]
david_pfander has quit [Ping timeout: 246 seconds]
aserio has quit [Ping timeout: 250 seconds]
* K-ballo
is not having much luck with vcpkg
jaafar has joined #ste||ar
jaafar_ has joined #ste||ar
jaafar has quit [Ping timeout: 260 seconds]
jgolinowski has joined #ste||ar
<diehlpk_work>
hkaiser, see pm
jaafar_ has quit [Remote host closed the connection]
jaafar has joined #ste||ar
aserio has joined #ste||ar
jaafar has quit [Quit: Konversation terminated!]
jaafar has joined #ste||ar
<simbergm>
jgolinowski: I guess you saw the message on your OpenCV PR? he seems happy? I think you just need to clarify to him what's missing and whether he's okay with the current state
<jgolinowski>
simbergm, you mean that some tests are not passing?
<simbergm>
yeah
<jgolinowski>
I am currently rebuilding everything (starting from HPX) and will rerun some tests and start looking into why it breaks
<simbergm>
ok, nice
<simbergm>
note that it might be something we can't fix
<simbergm>
or not directly in hpx at least
<simbergm>
but I don't know
<jgolinowski>
yes I am aware - so far I do not even have a good idea where to start :P