aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
hkaiser_ has joined #ste||ar
hkaiser has quit [Ping timeout: 240 seconds]
hkaiser_ has quit [Client Quit]
hkaiser has joined #ste||ar
<hkaiser>
K-ballo: should we deprecate boost::begin/end now?
<K-ballo>
hkaiser: yes, but the replacement situation is tricky
<hkaiser>
what's up there?
<K-ballo>
core should use util::, tests and examples should use std:: (with using std:: in the unlikely case ADL is intended)
<hkaiser>
what's the difference between util::begin/end and std::begin/end ?
<K-ballo>
util:: does ADL, like boost::swap does
<hkaiser>
k
<K-ballo>
that's something that we want to do in implementation when we work with generic ranges, but users shouldn't generally
<hkaiser>
nod makes sense
bikineev has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
EverYoung has joined #ste||ar
zbyerly_ has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
hkaiser has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
zbyerly_ has quit [Remote host closed the connection]
<jbjnr>
zao: in this case heller really did do all the work - which makes it all the worse that we get nothing from it.
<zao>
jbjnr: I see that someone is insane enough to try the IB PP. Nice.
<jbjnr>
I think you made a spelling error : cool enough - is what you meant to write
<zao>
Indeed :)
<zao>
Would be fun to unleash it on our IB some day, but that means getting around to it.
<jbjnr>
wait till I fix it as well
<mcopik>
is there a better way for accessing local data in work dispatched to other localities, other than storing it inside a serializable container and 'capturing' the container inside the function object?
<jbjnr>
is what you are asking a long-winded way of saying "pass it as an argument to the action"?
<jbjnr>
if so, then ...
<jbjnr>
it might be better to use a component
<jbjnr>
well, no
<mcopik>
yes, an argument of an action
<jbjnr>
currently the best way is to make it serializable - try to use serialize_buffer if you can for zero copy optimizations. I've added early RMA support on a branch and one day soon you'll be able to declare an rma_object<T> on node A and on node B call rma_id.put(stuff) to write from a T on one node directly to another.
<jbjnr>
but passing as an arg is the easiest way for now
denis_blank has joined #ste||ar
bikineev has joined #ste||ar
<mcopik>
jbjnr: thanks!
<mcopik>
ajaivgeorge_: ^
bikineev_ has joined #ste||ar
bikineev has quit [Ping timeout: 260 seconds]
bikineev_ has quit [Ping timeout: 246 seconds]
Matombo has quit [Remote host closed the connection]
<ajaivgeorge_>
hkaiser: I see that there is a HPX tutorial at SC17. If I get selected for the HPC for undergraduates program we will definitely meet there.
<ajaivgeorge_>
Also anyone attending ICPP at bristol in August?
<hkaiser>
ajaivgeorge_: cool
<taeguk>
Is there no projection for binary predicates? It seems that projections are only used with unary predicates in the parallel algorithms.
<hkaiser>
taeguk: yes, I think so - need to verify this, though
<hkaiser>
taeguk: pls use the Ranges TS as a guideline
david_pfander has quit [Ping timeout: 255 seconds]
ajaivgeorge_ has joined #ste||ar
ajaivgeorge has quit [Read error: Connection reset by peer]
patg[w] has joined #ste||ar
<patg[w]>
heller: what flag do I use to get cmake to find jemalloc? I tried JEMALLOC_DIR & ROOT, then tried specifying JEMALLOC_LIBRARY and JEMALLOC_INCLUDE_DIR because those were the two it cannot find
<diehlpk_work>
Were there changes in the docker configuration for hpx?
denis_blank has quit [Quit: denis_blank]
<patg[w]>
heller: never mind of course I had a typo
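For reference, a hedged sketch of the invocation being discussed; HPX_WITH_MALLOC selects the allocator, and the paths below are placeholders:

```shell
# Point cmake at a jemalloc install via the root hint:
cmake -DHPX_WITH_MALLOC=jemalloc \
      -DJEMALLOC_ROOT=/opt/jemalloc \
      /path/to/hpx

# Or, if the root hint is not enough, set the two cache entries the
# chat mentions directly:
cmake -DHPX_WITH_MALLOC=jemalloc \
      -DJEMALLOC_LIBRARY=/opt/jemalloc/lib/libjemalloc.so \
      -DJEMALLOC_INCLUDE_DIR=/opt/jemalloc/include \
      /path/to/hpx
```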
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
bikineev has joined #ste||ar
akheir has quit [Remote host closed the connection]
akheir has joined #ste||ar
vamatya has joined #ste||ar
mcopik has joined #ste||ar
patg[w] has left #ste||ar ["Leaving"]
akheir has quit [Read error: Connection reset by peer]
pree has quit [Quit: AaBbCc]
EverYoung has quit [Ping timeout: 246 seconds]
EverYoung has joined #ste||ar
akheir has joined #ste||ar
mcopik has quit [Ping timeout: 268 seconds]
vamatya has quit [Ping timeout: 268 seconds]
akheir has quit [Remote host closed the connection]
<github>
[hpx] hkaiser created fixing_hdf5_examples (+3 new commits): https://git.io/vQKBe
<github>
hpx/fixing_hdf5_examples caa0286 Hartmut Kaiser: Fixing build system for examples using HDF5
bikineev has quit [Remote host closed the connection]
Reazul has joined #ste||ar
<Reazul>
Hi, I have a performance-related query about launching embarrassingly parallel tasks using HPX. I am curious to know the best medium to share my code (very simple). Thanks
<K-ballo>
some pastesite, pastebin? gist?
<K-ballo>
whatever the cool kids use these days
<Reazul>
Thanks. I modified one of the simple examples for the following code. I am trying to launch EP tasks and measure the time required. I see that it is taking longer than normal and want to know how this can be improved. Pastebin link: https://pastebin.com/5kP1fWsy
<K-ballo>
Reazul: add some information on how you are running/measuring
<K-ballo>
my first intuition would be to ask whether you are running with more than 1 core
eschnett has quit [Quit: eschnett]
<Reazul>
I cloned it from SVN, using gcc 7.1.0, openmpi 2.1.1, boost version 1.58. I tried it with both --hpx:threads 20 and 1
<Reazul>
I just want to make sure that this is the best way to launch EP tasks in HPX.
<hkaiser>
Reazul: are you sure that rand() is not protected by a mutex internally?
<hkaiser>
this would nicely sequentialize all of your code
<hkaiser>
what's a EP task?
<hkaiser>
embarrassingly parallel?
<Reazul>
Embarrassingly Parallel
<Reazul>
Yes.
<hkaiser>
you can do it as you did or - better even - use hpx::parallel::for_each or hpx::parallel::for_loop
<hkaiser>
(or any of the other algorithms, for that matter)
<Reazul>
Thanks.
<hkaiser>
also, not sure why you need the foreman
<hkaiser>
(you probably took the hello_world example as a starting point)
<Reazul>
There shouldn't be a performance penalty the way I did, right?
<Reazul>
yes, you are correct I modified the hello world example
<hkaiser>
well, except for rand() either not being thread safe or (if it is thread safe) sequentializing the random number generation
<hkaiser>
the foreman in the hello_world example is used to make sure threads run on a particular core - something you don't need
<hkaiser>
but yah, I see what you're trying to do
<hkaiser>
looks ok to me
<Reazul>
Thanks. I wanted to run it by HPX experts.
<hkaiser>
Reazul: you should be able to get rid of the whole attendance dance
<hkaiser>
no reason to have that in your case
<Reazul>
That would be nice.
<Reazul>
Any example?
<hkaiser>
just remove it
<Reazul>
I see.
<hkaiser>
also no need to launch the ep_worker through an action
<hkaiser>
use actions only if you want to go over the wire (i.e. launch something on a different locality)