hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC2018: https://wp.me/p4pxJf-k1
hkaiser has joined #ste||ar
quaz0r has quit [Ping timeout: 260 seconds]
K-ballo has quit [Quit: K-ballo]
quaz0r has joined #ste||ar
hkaiser has quit [Quit: bye]
diehlpk has quit [Ping timeout: 244 seconds]
aserio has joined #ste||ar
aserio has quit [Quit: aserio]
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 265 seconds]
nanashi64 is now known as nanashi55
nikunj97 has joined #ste||ar
twwright has quit [Read error: Connection reset by peer]
<nikunj97>
I wanted to check if the patch made things work out on ppc
<jbjnr>
ok. checking out now. will leave build running whilst I go for coffee
<jbjnr>
report back in a bit
<nikunj97>
jbjnr, sure
mcopik has joined #ste||ar
nikunj97 has quit [Quit: Leaving]
<jbjnr>
nikunj[m]: I used cmake -DHPX_WITH_DYNAMIC_HPX_MAIN=ON . and now hello world runs correctly. Nice one.
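For reference, this is roughly the configure step implied by the flag above (the source path is a placeholder):

```shell
# Enable the dynamic hpx_main wrapper when configuring HPX
# (/path/to/hpx is a placeholder for the HPX source tree)
cmake -DHPX_WITH_DYNAMIC_HPX_MAIN=ON /path/to/hpx
cmake --build .
```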
nikunj[m] has quit [Ping timeout: 276 seconds]
anushi has quit [Remote host closed the connection]
anushi has joined #ste||ar
david_pfander has joined #ste||ar
nikunj[m] has joined #ste||ar
<nikunj[m]>
@jbjnr, that's good to hear that hpx now runs well on powerpc as well
<nikunj[m]>
Did you run tests as well?
<jbjnr>
yes
<jbjnr>
94% tests passed, 35 tests failed out of 573
<jbjnr>
The same ones failing as I had with DYNAMIC_HPX_MAIN off
<nikunj[m]>
@jbjnr then I don't think it has anything to do with my code
<jbjnr>
nope. it's something fishy on powerpc that doesn't happen on other linux flavours
<jbjnr>
I will investigate what's going on.
<nikunj[m]>
@jbjnr if you find out anything related to my implementation please let me know
<jbjnr>
of course.
<jbjnr>
I suspect a race condition that was previously unknown
<nikunj[m]>
@jbjnr could you please try building phylanx if possible
<nikunj[m]>
@jbjnr that's odd
<jbjnr>
not phylanx. I have too much work to do to get involved with another project
<jbjnr>
got deadlines to meet here
<jbjnr>
sorry.
<nikunj[m]>
@jbjnr no worries
mcopik has quit [Read error: Connection reset by peer]
<jbjnr>
race condition because we will be using 160 threads and most other tests use 8 or 16. Quite possibly there's a problem in the scheduling somewhere, or some place in parallel::algorithms, that is not triggered frequently
<Chewbakk_>
Hi, we have a few questions regarding the distribution policy of partitioned vectors: Does the default distribution policy distribute the vector blockwise? If yes, is it possible to control which locality owns which block/chunk? I am also asking because I read that it could also be partitioned in a round-robin manner
<jbjnr__>
I believe you can supply your own distribution policy
<jbjnr__>
but I've never used the partitioned vectors so I can't recall the details
Chewbakka has quit [Ping timeout: 240 seconds]
<hkaiser>
Chewbakk_: I think the default is to distribute the blocks round robin
<Chewbakk_>
This seems like a common scenario to me
<hkaiser>
Chewbakk_: what would you like to achieve?
anushi has quit [Remote host closed the connection]
anushi has joined #ste||ar
<Chewbakk_>
For k localities and a partitioned vector of size n: locality 0 owns the first n/k data entries, locality 1 the next n/k data entries, ...
<hkaiser>
Chewbakk_: that is the default, yes
anushi has quit [Remote host closed the connection]
<hkaiser>
the blocks (not elements) are distributed round robin, by default to as many localities as there are connected to the application
<hkaiser>
one block per locality
<Chewbakk_>
ah perfect, thank you
anushi has joined #ste||ar
anushi has quit [Remote host closed the connection]
<hkaiser>
Chewbakka: is everything built using Debug?
<hkaiser>
i.e. your library too?
<hkaiser>
how do you build your library/executable?
<hkaiser>
cmake?
<Chewbakka>
which libraries, e.g. Boost? We only set the cmake flag in our application
<Chewbakka>
yes with cmake
<hkaiser>
you should build HPX using Debug as well
<Chewbakka>
mh ok, that will take a while
<Chewbakka>
thank you
jaafar has joined #ste||ar
galabc has joined #ste||ar
Chewbakka has quit [Quit: Leaving...]
nikunj has joined #ste||ar
mbremer has joined #ste||ar
<nikunj>
hkaiser, yt?
<hkaiser>
here
<mbremer>
Hey guys, is hpx master broken at the moment? I pulled and rebuilt a docker container recently and keep getting errors like "hpx::init: can't initialize runtime system more than once! Exiting..."
<nikunj>
hkaiser, wrapping main works fine on ppc as well
<nikunj>
mbremer, did you try calling hpx::init from main after including hpx_main?
<hkaiser>
mbremer: all should be well since yesterday, when did you pull last?
<mbremer>
I rebuilt today. Let me look at the commit to be sure.
<github>
[hpx] hkaiser created integrate_hpxmp (+1 new commit): https://git.io/fNLAw
<github>
hpx/integrate_hpxmp c6df77a Hartmut Kaiser: Adding build system support to integrate hpxmp into hpx at the user's machine
<github>
[hpx] hkaiser opened pull request #3377: Adding build system support to integrate hpxmp into hpx at the user's machine (master...integrate_hpxmp) https://git.io/fNLAM
<nikunj>
hkaiser: the error "hpx::init: can't initialize runtime system more than once! Exiting..." doesn't explain much when hpx_main is included and hpx_init is called once. So I think I should add another check: if hpx_main is included and hpx_init is then called, print a corresponding error (something like: the hpx runtime is already initialized from main; remove hpx_main.hpp to use the hpx_init functionality)
<nikunj>
do you agree?
<hkaiser>
nikunj: if you think you can diagnose that, sure - would be absolutely appreciated
quaz0r has quit [Ping timeout: 264 seconds]
<nikunj>
hkaiser, added code specific to it. Testing it currently
anushi has quit [Read error: Connection reset by peer]
anushi has joined #ste||ar
quaz0r has joined #ste||ar
<nikunj>
hkaiser: to emit the above error I will have to add another weak symbol (same as the one in hpx_wrap.cpp) to libhpx_init.a
<hkaiser>
ok
<hkaiser>
let's keep this change independent of the current PR
<nikunj>
wait no
<nikunj>
I think I might be missing something
<nikunj>
hkaiser: ok, I will keep things independent of the current stable PR