aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
jakemp has quit [Ping timeout: 240 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 240 seconds]
<github>
[hpx] hkaiser force-pushed pool_elasticity from a15c9ff to 55ea677: https://git.io/vFFsk
<github>
hpx/pool_elasticity 55ea677 Hartmut Kaiser: Adding enable_elasticity option to pool configuration...
<github>
[hpx] hkaiser force-pushed pool_elasticity from 55ea677 to 5a082bb: https://git.io/vFFsk
<github>
hpx/pool_elasticity 5a082bb Hartmut Kaiser: Adding enable_elasticity option to pool configuration...
<hkaiser>
K-ballo: I like the bind_front/bind_back stuff
<K-ballo>
yeah?
<hkaiser>
yah, it simplifies things
<K-ballo>
I do less now after having it implemented
<K-ballo>
I did it mostly because LEWG said there were no genuine use cases for bind_back, but I can finish it and open a PR
<hkaiser>
I'd appreciate this!
<hkaiser>
that gives you the use case they are looking for
<K-ballo>
yeah, I figured we'd be using bind_back all over the place
<K-ballo>
because of those variadic arguments from users
<hkaiser>
nod
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 250 seconds]
quaz0r has quit [Quit: WeeChat 2.0-dev]
diehlpk has joined #ste||ar
eschnett has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
diehlpk has quit [Ping timeout: 248 seconds]
wash has quit [Ping timeout: 248 seconds]
wash has joined #ste||ar
diehlpk has joined #ste||ar
diehlpk has quit [Ping timeout: 248 seconds]
jaafar_ has quit [Ping timeout: 248 seconds]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
<github>
[hpx] sithhell deleted remove_component_factory at a7fc77a: https://git.io/vFbfl
parsa has joined #ste||ar
jakemp has joined #ste||ar
<wash[m]>
hkaiser, heller: what thing are you using to run unit test these days? Ctest?
<heller>
wash[m]: yeah
<wash[m]>
Do you use cdash or something to collect the results from everywhere?
<heller>
wash[m]: the main issue is that we only execute the tests with ctest. If we'd also include fetching the repo and building, we'd get far better information out of cdash
<wash[m]>
What do you mean? Cmake is the build system
<wash[m]>
Oh you mean it only shows test results, not the results of the builds? Why not (is that just a limitation of cdash?)
<jbjnr>
wash[m]: he means that ctest delivers the test results and we display them on the dashboard, but using ctest you can also do the update and build steps and deliver the changes etc. to the dashboard, so on our dashboard we see test results - but not what changed, or any build errors/warnings etc.
<github>
hpx/master 563705e Thomas Heller: Fixing unique_ptr move for older gcc and clang versions...
<jbjnr>
yes, it was easy to deliver the test results to cdash, but doing the update stuff would have required modification to the buildbot scripts
<wash[m]>
Yah
<wash[m]>
Sorry for that stuff :p
<wash[m]>
So cmake itself doesn't have the ability to submit to cdash? That's unfortunate
<wash[m]>
E.g. you'd think that even using buildbot, you would be able to tell cmake to submit build logs/results
<jbjnr>
ctest -D NightlyUpdate NightlyBuod NightlyTest NightlySubmit etc etc does the full set
<wash[m]>
Have y'all looked into alternatives to cdash/ctest?
<wash[m]>
Gotcha
<jbjnr>
Buod=Build
<wash[m]>
jbjnr: but couldn't you just replace the CMake invocations in buildbot with ctest -D whatever?
<jbjnr>
we just do the test+submit steps using ctest. It's been on my list for ages to upgrade the rest
<jbjnr>
yes
<jbjnr>
essentially, but then some of the buildbot collection of results would not work properly. I had a very quick look a couple of years ago, but nobody wanted it, so I didn't spend long on it
<wash[m]>
Gotcha
<wash[m]>
jbjnr: have you ever looked into Jenkins, etc, as an alternative?
<jbjnr>
we are looking at jenkins at cscs with a possibility of doing the hpx dashboards using it
<jbjnr>
but progress is painfully slow here
<heller>
wash[m]: jenkins would be an alternative to buildbot. not to ctest
<jbjnr>
correct, but in principle it would display build, update and test results
<wash[m]>
heller: yah, that is what I thought
<wash[m]>
I need something like ctest, because I have multiple different build/test systems that I want to build, test, and submit results to a single dashboard
<jbjnr>
ctest+cdash are perfect for that
<wash[m]>
Jenkins would require me to own the multiple different build/test systems
<wash[m]>
jbjnr: that is what I thought, just wanted to ask around to see if there are any alternatives I should consider
<heller>
CTest is very nice, in principle it lets you script anything
<heller>
with the downside of having to write those scripts in CMake
<jbjnr>
the kitware guys have made tweaks to buildbot that allow them to use buildbot + cdash as an integrated + distributed system.
<heller>
well, in principle, it is very easy
jbjnr_ has quit [Remote host closed the connection]
<heller>
the only buildstep you have is to call ctest with your required option. The downside is, that you don't see the different steps (as we have them now) on the buildbot console
<wash[m]>
Do y'all know if ctest can submit the same results to multiple dashboards?
<jbjnr>
you see them in the cdash dashboard
<jbjnr>
buildbot is shit anyway
<jbjnr>
I'd much rather look at a cdash dashboard
<wash[m]>
jbjnr: it was the best thing at the time
parsa has quit [Quit: Zzzzzzzzzzzz]
<heller>
jbjnr: I concur
<heller>
wash[m]: no
<heller>
wash[m]: why would you want to have that
<jbjnr>
wash[m]: yes
<jbjnr>
but it requires trickery
<heller>
you can?
<jbjnr>
you run ctest -D ExperimentalSubmit
<jbjnr>
then modify the ctest config with a different address
<heller>
i'd assume you have to submit it to a special cdash server that multiplexes to the different dashboards
<jbjnr>
then run it again
<heller>
ah
<wash[m]>
heller: because people at MV would probably be unhappy if logs from internal driver builds were pushed to the public dashboard
<jbjnr>
the config is 5 lines of stuff that can be easily generated by a cmake script
<jbjnr>
so it is quite "doable"
<wash[m]>
jbjnr: doesn't that run the tests twice though?
<heller>
wash[m]: so you have to configure different dashboards for different projects?
<jbjnr>
no, you run the Submit step twice, but the Test once
<wash[m]>
jbjnr: ah nice
<wash[m]>
heller: one project, two dashboards
<wash[m]>
One public one private
<jbjnr>
but you do have to manually edit the ctest setup config between submit steps and then put it back to the right values when done :)
<jbjnr>
I think you can actually use a command line param to say where to submit to and override the config file
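The two-submission trick described here could look roughly like the following CI fragment. This is a sketch under assumptions: the server names are placeholders, and editing CTestConfig.cmake (where CTEST_DROP_SITE lives) with sed stands in for whatever mechanism the real setup uses — it is not the STE||AR configuration, and it is not runnable without a configured CTest build tree:

```shell
# Run configure/build/test once, then submit the SAME results twice,
# swapping the drop site in CTestConfig.cmake between submissions.
ctest -D ExperimentalConfigure -D ExperimentalBuild -D ExperimentalTest
ctest -D ExperimentalSubmit        # first (e.g. public) dashboard
sed -i 's/public\.cdash\.example/private.cdash.example/' CTestConfig.cmake
ctest -D ExperimentalSubmit        # same results, second (private) dashboard
sed -i 's/private\.cdash\.example/public.cdash.example/' CTestConfig.cmake  # restore
```

The key point matches what is said below: the Test step runs once and only the Submit step is repeated.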
<wash[m]>
Do the Github CI services (appveyor and circleci) run the tests?
<jbjnr>
if you can, select SortByKey time in the drop down combo
<wash[m]>
The dart measurement bit, you mean?
<jbjnr>
yes
<jbjnr>
then zoom out
<wash[m]>
It has to be to stdout, can't be a separate file?
<jbjnr>
it's a bit shit this one, but you get the idea
<jbjnr>
never tried a separate file
<wash[m]>
Does it make plots?
<jbjnr>
might be doable with some tweaks here or there
<jbjnr>
plots = see the link above
<wash[m]>
Nice
<jbjnr>
very crude, but better than nothing
<jbjnr>
jenkins has better timing plots
<heller>
and no automatic reporting
<wash[m]>
jbjnr: now the next step is to have it make better plots, with confidence intervals :p
<wash[m]>
jbjnr: can cdash and Jenkins integrate?
<jbjnr>
if your test dropped a file, then we could easily knock up scripts to do nice plots
<jbjnr>
according to kitware blog, yes cdash + jenkins is doable, but they don't give much detail on the setup
<heller>
the problem I see: CDash is for reporting build and test failures. Jenkins is for running scripts *and* reporting test failures (same goes for buildbot)
<jbjnr>
correct - that link about azure and cdash mentions that they use jenkins for CI control and cdash for display
<heller>
what you can do with Jenkins is to analyze the CTest XML output, and give detailed reports
<heller>
Jenkins can use for example JUnit XML files for test reports, and there exists an XSL which transforms CTest XML to JUnit XML
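That CTest-to-JUnit route might look like this. The stylesheet filename is a hypothetical placeholder (several such XSLs circulate publicly, none is named in the conversation); the Testing/TAG file and Testing/&lt;tag&gt;/Test.xml layout are where CTest actually writes its results. A config fragment, not runnable outside a CTest build tree:

```shell
# Transform CTest's Test.xml into JUnit XML that Jenkins can ingest.
# ctest-to-junit.xsl: hypothetical local copy of a public stylesheet.
TAG=$(head -n1 Testing/TAG)   # CTest records the current dashboard tag here
xsltproc ctest-to-junit.xsl "Testing/${TAG}/Test.xml" > junit-results.xml
```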
<heller>
another option, of course, is to use things like google test or so, which is able to produce XML files for test output as well
<heller>
then you have to have some driver to run your tests
<heller>
it all boils down to there being multiple solutions with overlapping functionalities
<wash[m]>
Ctest could be your driver :p
<heller>
and neither solution is perfect ;)
jbjnr_ has joined #ste||ar
jbjnr_ has quit [Client Quit]
jbjnr_ has joined #ste||ar
<jbjnr_>
test :)
<jbjnr>
test the other way :)
jbjnr has quit [Quit: Leaving]
jbjnr_ has quit [Quit: Going offline, see ya! (www.adiirc.com)]
jbjnr has joined #ste||ar
* jbjnr
slaps heller totally with a ladies handbag
<heller>
jbjnr: wtf?
<jbjnr>
interesting
* jbjnr
hugs heller with a rather large squid
<jbjnr>
sorry. these new IRC clients have very strange options
<jbjnr>
I didn't know what it would do.
<zao>
I wonder if they've bothered fixing the unfortunate shortcuts in Hexchat yet.
<zao>
(there's still Ctrl- keybinds to close the current tab or program)
<zao>
I'm used to Ctrl-W to rub out words; it closes the channel irrecoverably in Hexchat :)
<jbjnr>
when the radio starts playing Steely Dan - it's time to go for silence for a bit.
<zao>
I wonder if I should retain the full XML output from runs in the DB, at least for recent runs.
<zao>
Eep, a user in the channel on cpplang slack :)
<zao>
*potential user
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
<heller>
zao: which channel?
<heller>
diehlpk_work: sooo, your guess is the worst. (modified) Levy distribution tends to be better than my rational polynomial
<heller>
the nice thing about the levy distribution is that it seems to point me in the right direction in terms of optimal grain sizes when I have not enough data points
<zao>
heller: #hpx
<zao>
ah, you found it.
<diehlpk_work>
heller, Ok, interesting
eschnett has quit [Quit: eschnett]
<heller>
diehlpk_work: I'll give some plots in a second
<diehlpk_work>
But these plots are for the 2D case, right?
<heller>
so far, yes
<diehlpk_work>
Ok, but future results should be in 3d?
<zao>
I'm quite surprised that HPX works properly when you have no other network interfaces than loopback in the session.
<zao>
(well, most tests passed at least :D)
<diehlpk_work>
hkaiser, Any news about the stack overflow issue? I would have time today to work on it
<heller>
diehlpk_work: with a single starting point, the levy distribution totally overshoots the optimum, but that's fine, I guess, once more data points come into play
<diehlpk_work>
heller, Yes, but Levy and my function go to zero for x -> inf
<diehlpk_work>
So we will not get the last point which is not zero
<diehlpk_work>
Is this a problem?
<heller>
diehlpk_work: not if you add a displacement to the fitted curve ;)
<heller>
diehlpk_work: you know what my biggest issue was?
<heller>
freaking floating point precision ... with my test data set, I have to indeed normalize the data
<heller>
I think going to zero for x -> inf is not a problem
<diehlpk_work>
Can you do the same plot for b*e^(...) * x
<heller>
sure
<heller>
for levy, I am fitting s * L(x, 0.0, c), btw
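For reference, the standard Lévy density with location 0 and scale c (textbook form, worth double-checking against the fitting library's parameterization), so the model s · L(x, 0, c) mentioned above expands to:

```latex
\[
L(x;\,0,\,c) \;=\; \sqrt{\frac{c}{2\pi}}\,\frac{e^{-c/(2x)}}{x^{3/2}},
\qquad x > 0,
\qquad\text{hence}\qquad
s\,L(x,0,c) \;=\; s\,\sqrt{\frac{c}{2\pi}}\;x^{-3/2}\,e^{-c/(2x)}.
\]
```

The x^{-3/2} tail is consistent with the later remark that the fitted functions go to zero as x -> inf.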
<diehlpk_work>
Or a*b * e^(-a / b * x**2) * x
<heller>
I should try the weibull distribution once more...
<diehlpk_work>
I like that the levy distribution results in good fits, but for understanding the function an easier one would be beneficial
<heller>
ok
<heller>
diehlpk_work: is it (-a/b) * x**2 or -a/(b * x**2)?
<heller>
I am impartial here. in the end, it's a trivial thing to fix
<K-ballo>
4 years ago
<jbjnr>
I've got races in my executor that are inexplicable. is dataflow ok? someone said it has problems....
<diehlpk_work>
hkaiser, zao I was able to locate the bug for PeriHPX with the bad parameter or eating memory
<diehlpk_work>
It seems that the bug is related to yaml-cpp
<diehlpk_work>
Using clang + hpx + yaml-cpp and compiling in debug mode results in an exception with a bad hpx parameter when the first parallel for loop is started, or this loop allocates memory again and again
<hkaiser>
interesting
<hkaiser>
jbjnr: try using the fixing_dataflow branch
<jbjnr>
thanks
<jbjnr>
I'll give it a go
K-ballo has quit [Quit: K-ballo]
<jbjnr>
rats. same segfault
<hkaiser>
hmmm
<jbjnr>
I'm very confused - I've been running tests for several days without problems - then I made one small tweak to a matrix var and since then everything bombs out - even after I undid the change - I've no idea what's happened, but I've got races almost every run now.
<hkaiser>
uhh
K-ballo has joined #ste||ar
aserio has quit [Quit: aserio]
jbjnr has quit [Quit: Going offline, see ya! (www.adiirc.com)]
jbjnr has joined #ste||ar
<diehlpk_work>
hkaiser, HPX_WITH_STACKOVERFLOW_DETECTION=OFF is the correct option, right?
<diehlpk_work>
hkaiser, Would you mind if we turn off this option by default?
eschnett has joined #ste||ar
<hkaiser>
diehlpk_work: it's off by default for release builds, I believe
<diehlpk_work>
Yes, but for debug?
<diehlpk_work>
It is still not working
<diehlpk_work>
When I turn it off in debug, I still get the false positive
<diehlpk_work>
I will check it tomorrow, today I was working on the debug and yaml-cpp issue