K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<gnikunj[m]>
Strange. Let me look into it. It worked fine for me on rostam.
<hkaiser>
this is on rostam
<gnikunj[m]>
:/
<hkaiser>
same on daint, btw
<gnikunj[m]>
why is plain_async_distributed failing? It doesn't even use distributed resiliency APIs
<hkaiser>
shrug, I have not looked
<gnikunj[m]>
aah it's the assertion that's failing: f_nodes < locales.size()
<gnikunj[m]>
how many nodes is it running on?
<hkaiser>
you may ignore the ones that are not related
<hkaiser>
I set them to run on 2 localities
<gnikunj[m]>
it should work then. f_nodes is set to 1, and if it's run on 2 localities, locales.size() should return 2.
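[For context, a minimal sketch of the guard being discussed; f_nodes and locales are names taken from the log, and the assumption that locales comes from hpx::find_all_localities() is illustrative:]

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <hpx/assert.hpp>

    #include <cstddef>
    #include <vector>

    int main()
    {
        std::size_t const f_nodes = 1;    // localities expected to fail
        std::vector<hpx::id_type> locales = hpx::find_all_localities();

        // holds when the test runs on at least 2 localities (e.g. srun -n 2);
        // on a single locality locales.size() == 1 and the assertion fires
        // (HPX_ASSERT is active in debug builds)
        HPX_ASSERT(f_nodes < locales.size());
        return 0;
    }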
<hkaiser>
I think I did :/
<hkaiser>
ahh, got it, I didn't set the localities for the perf tests
<gnikunj[m]>
the 1d stencil code seems to be segfaulting, an indication that I updated the wrong code. I should get the correct one from Loni and update the example. It should work then.
<gnikunj[m]>
hkaiser: aah that's why
<hkaiser>
I'll do the build system fixes
<gnikunj[m]>
yes, I'll update the 1d stencil code so it doesn't segfault
<hkaiser>
thanks
<gnikunj[m]>
hkaiser: what is this parameter: HPX_WITH_ASYNC_MPI?
<hkaiser>
gnikunj[m]: that enables the async mpi module (executor)
<gnikunj[m]>
does it mean that async will use MPI for actions if it is turned on? If yes, then why do we have HPX_WITH_PARCELPORT_MPI?
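[For context: HPX_WITH_PARCELPORT_MPI selects MPI as the transport for HPX parcels, i.e. how actions travel between localities, while HPX_WITH_ASYNC_MPI only builds the async_mpi module, whose executor turns the user's own non-blocking MPI calls into HPX futures; it does not change how actions are sent. A minimal sketch of that executor, assuming the hpx::mpi::experimental API; the header path and polling setup are assumptions that vary by HPX version:]

    #include <hpx/hpx.hpp>
    #include <hpx/modules/async_mpi.hpp>    // assumed module header
    #include <mpi.h>

    // turn a non-blocking MPI receive into an HPX future: the executor holds
    // the communicator, appends the trailing MPI_Request when invoking the
    // MPI call, and completes the future once the request finishes (an
    // hpx::mpi::experimental polling scope must be active on this thread pool)
    auto async_recv(int* buf, int from, int tag)
    {
        hpx::mpi::experimental::executor exec(MPI_COMM_WORLD);
        return hpx::async(exec, MPI_Irecv, buf, 1, MPI_INT, from, tag);
    }

[So with HPX_WITH_ASYNC_MPI on, a plain hpx::async without this executor still does not go through MPI; actions only use MPI when the MPI parcelport is enabled.]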
<gnikunj[m]>
replacing that with std::vector<hpx::id_type>(1, hpx::find_here()) makes the code work just fine. Initially I thought it could be that locality.size() is less than the value of the counter that's passed to operator[], but that's not the case either. It's strange behavior; I'll look into it tomorrow morning.
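[A sketch of the workaround just described; the commented-out failing pattern is a hypothetical reconstruction, not taken from the log:]

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>

    #include <vector>

    int main()
    {
        // hypothetical reconstruction of the failing pattern:
        //   std::vector<hpx::id_type> locales = hpx::find_all_localities();
        //   ... locales[counter] ...
        // the workaround keeps the work on the calling locality only, so
        // operator[] can never index past the end of the vector:
        std::vector<hpx::id_type> locales(1, hpx::find_here());
        hpx::id_type const target = locales[0];    // always valid
        (void) target;
        return 0;
    }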
<weilewei>
Oh, freenode IRC now has a small overview window for posted links