hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
K-ballo has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
nikunj has joined #ste||ar
nikunj97 has joined #ste||ar
nikunj has quit [Ping timeout: 268 seconds]
nikunj97 has quit [Read error: Connection reset by peer]
jbjnr_ has joined #ste||ar
<simbergm> jaafar: ah, continuation is an overloaded term
<simbergm> I think you got the right idea for dataflow
<simbergm> (including that dataflow(fork, ...) may not make much sense)
<simbergm> I just wanted to provide some context for the terminology
<heller> ./hpx/config/config_strings.hpp:12:10: fatal error: 'hpx/config/config_defines_strings_modules.hpp' file not found
<heller> anyone else run into that?
jbjnr_ has quit [Ping timeout: 276 seconds]
<heller> nvm. got it
jbjnr_ has joined #ste||ar
K-ballo has joined #ste||ar
<hkaiser> heller: that was my fault/merge problem - should be fine now
<heller> hkaiser: the problem seemed to be that the skeleton creation script didn't have those changes in
<hkaiser> ahh
<hkaiser> ok
<hkaiser> then it's my fault as well ;-)
aserio has joined #ste||ar
<hkaiser> aserio: can't get into webex :/
<hkaiser> g'morning, btw
<aserio> Morning
<aserio> What do you mean?
<hkaiser> incorrect email or password
<hkaiser> I used the correct password, I think
<hkaiser> so what email do I use?
<aserio> It should be your LSU email
<aserio> though do you need it to join a meeting?
<hkaiser> we have the coordination meeting now
<aserio> Shouldn't you be able to just click the link and join?
<aserio> now?
<hkaiser> no, it asks questions
<aserio> in 8 minutes right?
<hkaiser> I thought, yes
<hkaiser> no, can't get in
<aserio> I have just started the meeting
<hkaiser> ok, joined as guest
<aserio> Yea you should be able to join via this link: https://lsucct.webex.com/lsucct/e.php?MTID=md44da149de4f61889d3216829a6c24d8
<hkaiser> simbergm, rori: will you join?
<simbergm> hkaiser: yep, sorry
hkaiser has quit [Ping timeout: 245 seconds]
jbjnr_ has quit [Ping timeout: 246 seconds]
hkaiser has joined #ste||ar
aserio has quit [Ping timeout: 264 seconds]
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 245 seconds]
K-ballo1 is now known as K-ballo
<heller> you can't use move only types in omp task regions :(
jbjnr_ has joined #ste||ar
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 268 seconds]
K-ballo1 is now known as K-ballo
<heller> works!
<heller> jbjnr_: ^^
aserio has joined #ste||ar
<hkaiser> heller: why not use an executor?
<heller> because executors on the execution context stuff hasn't been implemented yet :P
<heller> but yes, that would be the end goal
<hkaiser> ok, cool
<heller> no hpx_init or hpx_main though
<heller> everything just like that
<heller> hkaiser: and MPI async send/recv with hpx::future: https://gist.github.com/sithhell/50fdbc934aeb4868b5cb273fe3a7515e
<heller> credits go to jbjnr_
<heller> futurize ALL the things
<hkaiser> lol
<hkaiser> that's my line ;-)
<heller> :P
<heller> I am stealing everything :P
<hkaiser> feel free!
<hkaiser> heller: but why hpx::mpi::invoke and not hpx::mpi::async?
<heller> hkaiser: async has the connotation of an RPC, at least for me
<hkaiser> or even hpx::async(hpx::mpi::executor, ...
<heller> which that one isn't
<heller> this is just wrapping the async MPI functions to return a future
<hkaiser> async has nothing in common with RPC
<heller> it has in our context
<hkaiser> it could be RDMA as well, or anything else
<hkaiser> it simply asynchronously does something
<heller> asynchronous operations: yes
<hkaiser> no
<hkaiser> it asynchronously invokes an action
<heller> std::async and hpx::async launch tasks. hpx::async can launch tasks remotely
<hkaiser> no, it launches an action which happens to do remote things
<heller> I wouldn't overload terms here
<heller> in my book, an action is all about doing RPC
<hkaiser> it could do RDMA or trigger a local operation
<heller> well, a procedure does those things, true
<heller> however, to make MPI action aware, it requires far more things
<heller> and that's not the point here ... the point is more about how to be able to use futures with thin layers over existing software ecosystems
<hkaiser> well, sure
<heller> I think the first step is to show that HPX, as its own software ecosystem, is capable enough to adapt to other existing, or even emerging, ones
<heller> to not only open up a migration path, but to use its different modules as building blocks without imposing too much
<heller> does this make sense?
<hkaiser> sure
<hkaiser> btw stackfull vs. stackless
<hkaiser> 500000 hpx threads: 2.6s vs. 2.4s
<hkaiser> as expected, 10% improvement
<heller> 8 :P
<heller> can you compare the numbers to master as well please?
jbjnr_ has quit [Ping timeout: 252 seconds]
<hkaiser> yah, next on my list
<hkaiser> heller: master is at 2.55s
<heller> hkaiser: interesting
<hkaiser> heller: this is obviously no thorough analysis yet
<hkaiser> just a quick manual run
<heller> hkaiser: I think the stackless tasks will really shine after streamlining the scheduling loop and task states
<hkaiser> absolutely
<hkaiser> needs more work, definitely
<heller> I think they might even be a perfect fit for a custom execution agent/execution context
<heller> which should help with that
jbjnr_ has joined #ste||ar
hkaiser has quit [Ping timeout: 250 seconds]
aserio has quit [Quit: aserio]
hkaiser has joined #ste||ar
<hkaiser> heller: those _are_ a different agent
<heller> hkaiser: exactly. they almost share nothing with thread_data ;)
<hkaiser> thread_data ?
<hkaiser> what do you mean?
<heller> that your implementation of stackless tasks derive from thread_data, or did I misunderstand something?
<hkaiser> they do, yes
<heller> so what I wanted to say, we have thread_data representing one agent, and the stackless ones representing another
<hkaiser> heller: hmmm, not really
<hkaiser> thread_data_stackfull is one, thread_data_stackless the other; both are derived from thread_data
<heller> sure, that's how you implemented it
<heller> I'd argue however, that they shouldn't share the same base class
<hkaiser> only implementation sharing, a) didn't want to copy, b) wanted to keep the state management non-virtual
<heller> hkaiser: https://github.com/STEllAR-GROUP/hpx/commit/6c43b1a005bacead97c14c54fcd24c9fe0f3e52d <-- this is what's needed for the openmp/mpi stuff to work, btw
<heller> still an early stage
<hkaiser> I think you're recreating the executor interface here
<heller> partially, yes
<hkaiser> post() is an executor API function
<hkaiser> why, then?
<heller> still two different things
<hkaiser> k
<heller> a context should expose a default executor eventually
<heller> that is, a context needs to be able to spawn agents
<heller> the post function is a strawman, didn't come up with a better name
<hkaiser> sure
<hkaiser> executors are lightweight wrappers for contexts, so this is probably ok
<heller> but you should be able to instantiate different executors using the same context
<heller> right
<heller> executors are the API to spawn agents on contexts
<hkaiser> nod
<heller> closing the circle ... stackless tasks could be represented by their own context
<heller> spawning their specific agents
<hkaiser> nobody has said that contexts couldn't spawn various agents
<hkaiser> and I think there is a use for that
<heller> hmm
<heller> having a one-to-one relationship would simplify the API
<heller> since the context carries the information on what agents to spawn, there's no need for additional parameters to distinguish between them, and we don't impose the requirement for different contexts to support spawning various agents
<hkaiser> so even different stack sizes would imply using different contexts?
<heller> good question
<heller> so would that rather be a property of the executor then?
<heller> how do we handle different stack sizes for locally launched tasks right now?
<hkaiser> it's a parameter to register_thread
<hkaiser> heller: well, it might be a property of the executor, but even then you have to have a way to pass it on to the context
<heller> right, but is a different stack size really a different agent?
<heller> isn't stacksize a different thing than stackless vs. stackfull?
<hkaiser> same for priorities, scheduling-hints, etc.
<hkaiser> shrug
<hkaiser> I could imagine that a user might want to mix stackfull and stackless in the same parallel context/region
<heller> the problem I have with the stack size is that they essentially leak implementation details
<heller> stackless/stackfull in parallel regions: yes, probably
<hkaiser> hmm, it's similar to priorities, no?
<heller> I don't think so, different priorities for different tasks can stem from algorithmic needs
<heller> stack sizes are a limitation of the underlying coroutine machinery
<heller> what do stack sizes mean for stackless tasks?
<hkaiser> pthreads have an API for stacksizes as well
<heller> windows fibers don't
<hkaiser> they do
<heller> ok
<heller> std::thread doesn't, at least
<hkaiser> yah, because it's meant to be platform independent
<hkaiser> anyways
<heller> let's sleep over it a few nights...
<hkaiser> right
<hkaiser> in any case, your stuff is a game changer
<heller> let's see how the acceptance is tomorrow
<heller> potentially a game changer, we just have to carry it through
<heller> plus, I still have to check how it relates to the whole new future API that eric bryce and david are cooking up
<hkaiser> their stuff is lower level
<heller> even orthogonal to this
<hkaiser> right
<hkaiser> it's an infrastructure to implement things like futures considering heterogeneous contexts/executors
<heller> nod
jbjnr_ has quit [Ping timeout: 264 seconds]
hkaiser has quit [Ping timeout: 240 seconds]