aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<diehlpk>
heller, did you get any response on the OpenSuCo paper?
diehlpk has quit [Remote host closed the connection]
<jbjnr>
I will start chasing people up at CSCS to find out about feasibility etc of using our Jenkins for the above
<heller>
jbjnr: sounds good!
<jbjnr>
please add more things. I must have forgotten stuff.
<heller>
do we want to test the CUDA stuff?
<heller>
intel compiler?
<jbjnr>
also compilers, boost versions etc. Should MSVC also be an "essential"?
<jbjnr>
intel. good point
<heller>
what about the millions of cmake options with different dependencies?
<jbjnr>
indeed. What do we do? How many builds can we realistically imagine?
<hkaiser>
could we cycle through a set of options?
<jbjnr>
good idea
<heller>
test one set each night?
<hkaiser>
yah
<jbjnr>
or perhaps have N cmake settings randomly combined on each build!
<heller>
na, it has to be reproducible
<jbjnr>
ok
<jbjnr>
anyway, add stuff to the doc
<heller>
also problematic for a PR
<hkaiser>
well, as long as we can figure out what was used, random is fine
<heller>
it might not test the stuff that has been proposed
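For reference, a minimal sketch (the option names are placeholders, not the agreed list) of how a nightly job could combine CMake options randomly and still stay reproducible: the seed is printed next to the chosen configuration, so the exact build can be recreated later for a PR or a failure investigation.

    // Pick a random but reproducible CMake configuration for a nightly build.
    // The seed is logged with the build so the selection can be replayed.
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <string>
    #include <vector>

    int main(int argc, char* argv[])
    {
        // Reuse a seed passed on the command line to reproduce a past build,
        // otherwise draw a fresh one and print it.
        std::uint32_t seed =
            argc > 1 ? std::stoul(argv[1]) : std::random_device{}();
        std::cout << "# seed: " << seed << "\n";

        std::mt19937 gen(seed);

        // Placeholder option values; the real lists would come from the doc.
        std::vector<std::vector<std::string>> options = {
            {"-DHPX_WITH_CUDA=ON", "-DHPX_WITH_CUDA=OFF"},
            {"-DHPX_WITH_MALLOC=tcmalloc", "-DHPX_WITH_MALLOC=jemalloc"},
            {"-DCMAKE_BUILD_TYPE=Debug", "-DCMAKE_BUILD_TYPE=Release"},
        };

        for (auto const& values : options)
        {
            std::uniform_int_distribution<std::size_t> pick(0, values.size() - 1);
            std::cout << values[pick(gen)] << "\n";
        }
        return 0;
    }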
<jbjnr>
hkaiser: you missed the discussion, but we spoke to my boss and agreed that if CSCS can provide it, then I can approach them with potential requirements and see if they can take over the CI
<hkaiser>
cool!
<jbjnr>
they might not be able to actually deliver, but we'll see
<hkaiser>
main benefits: a) we know Klaus, b) we can easily create an HPX backend
<jbjnr>
the benchmarks are fine, but the problem is that the BLAS level 1/2 routines are too easy, and without the eigensolvers and such it's useless. I like the API and would use it anyway, but this is the kind of thing the real linear algebra people are doing: http://www.icl.utk.edu/files/publications/2017/icl-utk-980-2017.pdf
<jbjnr>
this is where (one day) our hpx backend for linear algebra could go.
<hkaiser>
ok
<hkaiser>
well, nothing goes without involving Dongarra in the field of LA
pree has quit [Read error: Connection reset by peer]
<hkaiser>
at least distributed LA
<jbjnr>
if I have a task on a queue and I want to get the arguments as a tuple, is it possible?
<jbjnr>
(inside the scheduler). there must be some task structure that holds them
<hkaiser>
jbjnr: that's not possible
<jbjnr>
how does invoke get them?
<hkaiser>
the arguments are bound to the function using util::bind()
<K-ballo>
deferred_call ?
<hkaiser>
even more, the actual hpx-thread has a fixed API
<jbjnr>
can it be rebound to another function somehow
<K-ballo>
(we can't use bind with user defined arguments)
<hkaiser>
K-ballo: sure
<hkaiser>
jbjnr: sec
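A rough standard-library analogue (not the actual HPX internals) of the point being made here: once the arguments have been bound into the task's closure, the scheduler only ever sees a type-erased nullary callable, so there is no way to get them back out as a tuple.

    #include <functional>
    #include <iostream>

    void task(int tile, double value)
    {
        std::cout << "tile " << tile << ", value " << value << "\n";
    }

    int main()
    {
        // The arguments are baked into the closure when the task is created ...
        std::function<void()> thread_func = std::bind(&task, 42, 3.14);

        // ... and from here on the only exposed operation is invoking it;
        // the bound arguments are no longer individually accessible.
        thread_func();
        return 0;
    }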
<jbjnr>
I'm working on numa placement and I have many, many tasks, each an operation on some matrix tiles. Inside the scheduler I have a way of querying a memory address and getting its numa placement. I would like to pull the arguments off the task and forward them to a helper function that does the numa check before the task is added to the queue
<jbjnr>
so I can move it to the queue on the right numa domain
<jbjnr>
I have a templated guided_pool_executor that holds the helper function
<jbjnr>
this is all good, but I do not know how to call my helper with my task's arguments
<jbjnr>
(imagine calling the task twice: once into my helper, the second time for the real task)
<jbjnr>
The helper takes the same args as the task/continuation/etc.
<hkaiser>
wrap your function into a helper function which does the numa check?
<hkaiser>
before actually being invoked
<jbjnr>
then it will only be called after it is in the scheduler and executed - then it might be on the wrong numa domain - it needs to be done earlier
<heller>
The configuration of the workers, builds and repositories is all through human-readable text files (json and cmake), automatically updated by github commits
<heller>
The builds on the workers will be started via ssh
<heller>
I think I can finish a first version by the end of the week
EverYoung has quit [Ping timeout: 255 seconds]
<K-ballo>
where are the builds?
<heller>
K-ballo: not implemented yet
pagrubel has quit [Remote host closed the connection]
patg[w]_ has joined #ste||ar
EverYoung has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
diehlpk has joined #ste||ar
aserio has joined #ste||ar
eschnett has quit [Quit: eschnett]
hkaiser has quit [Read error: Connection reset by peer]
eschnett has joined #ste||ar
diehlpk has quit [Remote host closed the connection]
diehlpk has joined #ste||ar
hkaiser has joined #ste||ar
jbjnr has quit [Quit: ChatZilla 0.9.93 [Firefox 56.0/20170926190823]]
<jbjnr>
this is called from code that runs during the scheduling of a continuation etc. - would adding a dataflow in here not cause problems?
<jbjnr>
actually - I couldn't put it there because the args are hidden inside the closure
<hkaiser>
jbjnr: right, create a wrapping executor which dispatches to the pool executor
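A minimal, HPX-free sketch of that wrapping-executor idea (guided_executor, pool_executor and numa_hint are illustrative names, not the real API): the wrapper still sees the callable and its arguments before they disappear into the scheduler's closure, runs the hint on them, and only then dispatches to the underlying executor.

    #include <future>
    #include <iostream>
    #include <utility>

    // Stand-in for the real pool executor; here it just uses std::async.
    struct pool_executor
    {
        template <typename F, typename... Ts>
        auto async_execute(F&& f, Ts&&... ts)
        {
            return std::async(std::launch::async,
                std::forward<F>(f), std::forward<Ts>(ts)...);
        }
    };

    template <typename Executor, typename Hint>
    struct guided_executor
    {
        Executor exec;
        Hint hint;    // e.g. queries the numa placement of the arguments

        template <typename F, typename... Ts>
        auto async_execute(F&& f, Ts&&... ts)
        {
            hint(ts...);    // inspect the arguments before anything is scheduled
            return exec.async_execute(std::forward<F>(f), std::forward<Ts>(ts)...);
        }
    };

    int main()
    {
        auto numa_hint = [](double* tile, int size) {
            std::cout << "would query the numa domain of " << tile
                      << " (" << size << " elements)\n";
        };
        guided_executor<pool_executor, decltype(numa_hint)> exec{{}, numa_hint};

        double tile[16] = {};
        auto f = exec.async_execute([](double* t, int n) { t[0] = n; }, tile, 16);
        f.get();
        return 0;
    }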
<jbjnr>
incidentally hkaiser, if the API changing from futures to unwrapped values was a real problem, then inside the forwarding executor, after the dataflow, one could re-wrap them into ready futures with make_ready_future. (just btw)
<hkaiser>
jaafar: yah, I thought about that - and we can do it more efficiently, if needed
<jbjnr>
who is this jarjar binks character anyway ...
pree has joined #ste||ar
patg[w]_ has quit [Ping timeout: 255 seconds]
pagrubel has joined #ste||ar
<hkaiser>
darn autocomplete
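A short sketch of the re-wrapping idea mentioned above, assuming the usual hpx::make_ready_future: values that dataflow has already unwrapped can be handed on as ready futures, so a callee that keeps the old future-based signature still works unchanged.

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/lcos.hpp>

    #include <iostream>

    // Old-style helper that still expects futures.
    void numa_check(hpx::future<int> a, hpx::future<int> b)
    {
        std::cout << a.get() + b.get() << "\n";
    }

    int main()
    {
        // Values as they arrive after dataflow has unwrapped them ...
        int x = 1, y = 2;

        // ... re-wrapped into futures that are ready immediately.
        numa_check(hpx::make_ready_future(x), hpx::make_ready_future(y));
        return 0;
    }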
pree has quit [Ping timeout: 248 seconds]
hkaiser has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
jbjnr_ has joined #ste||ar
jbjnr has quit [Ping timeout: 246 seconds]
jbjnr_ is now known as jbjnr
eschnett has joined #ste||ar
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
eschnett has quit [Ping timeout: 240 seconds]
eschnett has joined #ste||ar
jakemp has joined #ste||ar
eschnett has quit [Quit: eschnett]
pagrubel has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
<aserio>
hkaiser: yt?
<hkaiser>
here
<hkaiser>
aserio: ^^
diehlpk has quit [Remote host closed the connection]
<aserio>
What objects can capture a nil?
<aserio>
hkaiser: I am seeing this-> what(): primitive_result_type does not hold a literal value type: HPX(bad_parameter)
diehlpk has joined #ste||ar
<hkaiser>
aserio: right - something is trying to interpret a primitive_result_type which is empty (nil)
<aserio>
hkaiser: as it should
<hkaiser>
use phylanx::execution_tree::is_valid to check instead
<aserio>
I have made the condition false
<hkaiser>
nod, right
<aserio>
I am just confused about how this should be handled
<hkaiser>
use phylanx::execution_tree::is_valid to check instead
<hkaiser>
if this returns false you're golden
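A minimal sketch of the suggested pattern, using the names from this exchange (primitive_result_type and phylanx::execution_tree::is_valid); the exact headers, the spelling of is_valid, and the assumption that a default-constructed result holds nil may differ from the actual Phylanx sources.

    #include <phylanx/phylanx.hpp>
    #include <hpx/hpx_main.hpp>

    #include <iostream>

    void handle(phylanx::execution_tree::primitive_result_type const& result)
    {
        // An empty (nil) result is a legitimate outcome of a false condition,
        // so test for it first; extracting a literal value from it directly
        // is what triggers the HPX(bad_parameter) error seen above.
        if (!phylanx::execution_tree::is_valid(result))
        {
            std::cout << "condition was false, nothing to extract\n";
            return;
        }

        std::cout << "result holds a value, safe to interpret it\n";
    }

    int main()
    {
        phylanx::execution_tree::primitive_result_type empty;   // assumed nil
        handle(empty);
        return 0;
    }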
<diehlpk>
hkaiser, Michael was here today and he will visit you tomorrow
diehlpk has quit [Remote host closed the connection]
<hkaiser>
aserio: so I was wrong, the 'missing' overload is there, no need for me to add anything
diehlpk_ has joined #ste||ar
diehlpk has quit [Ping timeout: 258 seconds]
diehlpk_ has quit [Remote host closed the connection]
diehlpk_ has joined #ste||ar
diehlpk_ has quit [Remote host closed the connection]
diehlpk_ has joined #ste||ar
diehlpk__ has joined #ste||ar
aserio has quit [Quit: aserio]
diehlpk_ has quit [Ping timeout: 248 seconds]
wash has joined #ste||ar
<wash>
hkaiser: ping. I seem to remember either an HPX or a boost document from a few years back, describing what a minimal test case is, and what info people should include in a bug report