hkaiser changed the topic of #ste||ar to: The topic is 'STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/'
<nikunj> hkaiser, yt?
K-ballo has quit [Quit: K-ballo]
<nikunj> does hpx initialize the mpi runtime on hpx::init as well (considering the use of the mpi parcelport)?
<hkaiser> nikunj: yes
<nikunj> that helps with flecsi initializations then
<hkaiser> but that should not conflict with any MPI initialization the application might do itself
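[Editor's note: the non-conflict hkaiser describes is usually handled on the application side with the standard MPI_Initialized query. A minimal sketch; the helper name is hypothetical and not from the log:]

```cpp
#include <mpi.h>

// Guard application-side MPI initialization in case the HPX MPI
// parcelport has already called MPI_Init during hpx::init.
void ensure_mpi_initialized(int* argc, char*** argv)
{
    int initialized = 0;
    MPI_Initialized(&initialized);  // standard MPI query, safe to call any time
    if (!initialized)
        MPI_Init(argc, argv);
}
```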
<nikunj> also, I was going through your code and you use the resource partitioner. why?
<hkaiser> I created two thread pools, one for MPI (1 core) and one for the rest
<nikunj> yes. I can't understand the reasoning behind it
<hkaiser> the idea was to separate all MPI operations the application might do from the actual non-MPI related computation
<nikunj> aah! makes sense
<hkaiser> MPI calls might block, this approach makes sure the HPX threads do not stall the core
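[Editor's note: the two-pool setup hkaiser describes could look roughly like this. A sketch against the HPX resource partitioner API as exposed via hpx::init_params in recent HPX releases; the pool name "mpi" and the exact core assignment are illustrative assumptions, not taken from the log:]

```cpp
#include <hpx/hpx_init.hpp>
#include <hpx/include/resource_partitioner.hpp>

// Sketch: a dedicated single-core pool for (potentially blocking)
// MPI calls; all other work stays on the default pool, so blocked
// MPI operations cannot stall the cores running HPX threads.
int main(int argc, char* argv[])
{
    hpx::init_params params;
    params.rp_callback = [](hpx::resource::partitioner& rp,
        hpx::program_options::variables_map const&) {
        rp.create_thread_pool("mpi");
        // hand the first core of the first NUMA domain to the MPI pool
        rp.add_resource(rp.numa_domains()[0].cores()[0], "mpi");
    };
    return hpx::init(argc, argv, params);
}
```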
<nikunj> so if I let hpx initialize the mpi runtime, I'll be fine in theory. Right?
<hkaiser> yes
<nikunj> great, that lifts some load for starters. I've been debugging flecsi the last few days. Now it's time for some additions
<nikunj> I think I know what to do now.
<hkaiser> nice!
<hkaiser> thanks for pushing this forward!
<nikunj> I should be able to get something by 2nd January. Not sure about the coloring things, will have to ask Rod. I'll start with the execution part once I'm done with the runtime
<hkaiser> nod
<hkaiser> coloring is just a fancy name for MPI ranks
<hkaiser> I think
<nikunj> they changed it with the refactor
<hkaiser> ok
<nikunj> it's no longer based on MPI ranks
<nikunj> they added colors and processes. processes are the equivalent of MPI ranks, while colors are something else
<nikunj> more like ranks within a process
<hkaiser> ahh, so more like data partitions
<hkaiser> makes sense
<nikunj> yes, I'll have to see how I can compute that, so I'm leaving that for now
<nikunj> handling the easier parts
<hkaiser> sure, let's start simple
<nikunj> how do I get the rank and size of the mpi initialization in hpx?
<nikunj> hkaiser ^^
<hkaiser> either use the mpi api or the hpx one ;-)
<hkaiser> hpx::get_locality_id() == rank, hpx::get_num_localities() == size
<nikunj> ohh yeah
<nikunj> true
<nikunj> my bad
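[Editor's note: the rank/size mapping above can be sketched as follows. A minimal sketch using the HPX functions named in the log; note that hpx::get_num_localities() returns a future, hence the .get():]

```cpp
#include <cstdint>
#include <iostream>

#include <hpx/hpx_main.hpp>
#include <hpx/include/runtime.hpp>

int main()
{
    // HPX locality id/count play the role of MPI rank/size
    std::uint32_t rank = hpx::get_locality_id();
    std::uint32_t size = hpx::get_num_localities().get();
    std::cout << "locality " << rank << " of " << size << '\n';
    return 0;
}
```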
nikunj has quit [Ping timeout: 265 seconds]
hkaiser has quit [Quit: bye]
nikunj has joined #ste||ar
<nikunj> is hpx master broken?
nikunj has quit [Ping timeout: 258 seconds]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
nikunj has joined #ste||ar
<nikunj> hkaiser, yt?
<nikunj> hkaiser, master seems to be broken with mpi parcelport: https://gist.github.com/NK-Nikunj/21b03515087f19e3828bfd28d57392f7
<hkaiser> nikunj: doesn't look like something mpi specific
<nikunj> without turning on the mpi parcelport, it builds just fine
<nikunj> so I thought, it may have something to do with it
<hkaiser> uhh
<hkaiser> I rather think it's because of this: -- Performing Test HPX_WITH_CXX11_ATOMIC_128BIT - Failed
<nikunj> come to think of it, HPX_WITH_CXX11_ATOMIC was not found
<nikunj> in the cmake config
<hkaiser> right, that's what I said
<hkaiser> let me have a look
<hkaiser> what system are you on?
<nikunj> it's fedora
<nikunj> it's a container
<nikunj> [stellar@0be62defaec5 flecsi]$ uname -r
<nikunj> 4.9.184-linuxkit
<zao> Is this the full output from a fresh CMake run? It seems to not mention compilers and stuff up top
<hkaiser> that should do the trick, I remember I removed it at some point, somehow it got back in
<nikunj> zao, I'm not sure if it was fresh. iirc, it was a fresh one
<nikunj> hkaiser, let me try
<nikunj> ohh they're because of reuse of those arguments
<nikunj> let me correct them as well
<nikunj> hkaiser, removing https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/runtime/threads/policies/thread_queue_mc.hpp#L65-L68 would mean that thread_queue_mc is no longer a template
<nikunj> and you're using templated arguments within the class
<nikunj> and you can't get rid of thread_queue_type since they're used later in the program
<nikunj> hkaiser, getting rid of templates exposes more errors https://gist.github.com/NK-Nikunj/21b03515087f19e3828bfd28d57392f7#file-new-make-output
<hkaiser> nikunj: give me a sec
<hkaiser> nikunj: I'll have it fixed in a sec
<nikunj> hkaiser, sure
<hkaiser> nikunj: see #4289
<nikunj> so it was a cast issue after all?
<nikunj> hkaiser ^^
<hkaiser> the use of a type that is defined only if 128 bit atomics are available
<hkaiser> this was fixed already, but John reintroduced it ...
<nikunj> what type are we talking about and what is it required for?
<hkaiser> the lifo_lockfree queue backend
<nikunj> when did we have lockfree queues?
<nikunj> I thought that was a gsoc project
<hkaiser> we've had those for ever
<nikunj> then what new lockfree data structures were we looking out for?
<nikunj> I do remember a gsoc project on adding lockfree data structures to hpx
<hkaiser> other lockfree containers
<hkaiser> like hashmaps
<nikunj> aah, I see
<nikunj> hkaiser, it works now. Thanks!
<hkaiser> nikunj: ok, pls comment on the PR
<nikunj> yes hold on