hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
K-ballo has quit [Quit: K-ballo]
Coldblackice_ has joined #ste||ar
Coldblackice has quit [Ping timeout: 252 seconds]
Coldblackice_ is now known as Coldblackice
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
Coldblackice_ has joined #ste||ar
Coldblackice has quit [Ping timeout: 265 seconds]
K-ballo has joined #ste||ar
Coldblackice has joined #ste||ar
Coldblackice_ has quit [Read error: Connection reset by peer]
Coldblackice_ has joined #ste||ar
Coldblackice has quit [Ping timeout: 240 seconds]
<heller> Test
<heller> simbergm: I don't like #pragma once ;)
<simbergm> heller: why? :P
<heller> it just feels dirty
<heller> and yeah, it's probably not worth the effort
<simbergm> feels dirty :P
<simbergm> I already have a branch with the replacements, it can be automated you know... ;)
<heller> I am totally impartial there
<heller> I can see the benefits...
<K-ballo> what are the benefits?
<K-ballo> considering the weak points of #pragma once are duplicated files and network filesystems, and we do both a lot
<heller> the benefit is that we get rid of inconsistent include guards
<heller> just a style issue, I guess
<heller> is the duplicated files still a problem?
<K-ballo> sure, duplicated files are different #pragma once wise
<K-ballo> inconsistent include guards as in.. people are coming up with novel ways to name them?
<heller> yup
<heller> not really an issue per se...
<K-ballo> we can solve that with a big heavy stick
Coldblackice_ has quit [Ping timeout: 265 seconds]
Coldblackice has joined #ste||ar
<simbergm> K-ballo: do we or can we actually hit those corner cases in hpx? duplicated files we don't have for sure, what's the problem exactly with network file systems?
<simbergm> I guess your heavy stick would be an inspect check?
<K-ballo> sure we can hit them, we can hit them without #pragma once too.. every so often someone comes up with an inexplicable error caused by mixing includes from system-wide installed hpx and locally built master, for example
<K-ballo> inspect check.. I guess if physical punishment is out of the question, then sure
<K-ballo> now we should also keep in mind each build has its own copy of headers, right?
<K-ballo> were modules copying files? I don't remember
<simbergm> no, generated files would of course be duplicated for each build directory but the main files are not
<simbergm> so that's not really an issue with pragma once then
<simbergm> anyway, I feel like there's too much resistance already...
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
<simbergm> hkaiser: thanks for the email from the cmake guy, it's like magic!
<jaafar> Hey HPX team... I have a dataflow question
<jaafar> Is anything done inside HPX when you have a big dataflow graph, to schedule it for maximum parallelism?
<jaafar> Or does it just grab anything whose dependencies are met and start there...
<jaafar> because it appears to be the latter :)
<jaafar> which means careful control over the ordering of tasks is necessary
<simbergm> jaafar: the latter... since tasks are submitted right away there's not much we can do to choose the schedule, plus one would need some sort of cost estimate for each task
<simbergm> you might want to play around with the different schedulers and queues we have (if you haven't already)
<simbergm> adding tasks to the schedulers fifo or lifo might make a difference
<jaafar> simbergm: can you give me an example of how to do that :)
<simbergm> sure :)
<jaafar> I mean, assuming you don't simply mean "construct dataflows in the order you want them executed, or the reverse order"
<jaafar> which is something I'm experimenting with... largely unsuccessfully but it does seem to have an impact
<simbergm> no, I mean how the scheduler handles new tasks
<simbergm> whether they go to the top or bottom
<jaafar> oh good, there's a hook for that
<jaafar> love it
<simbergm> so the easiest way is --hpx:queuing=X where X can be one of a few predefined schedulers
<simbergm> looking for a list of valid options
<simbergm> there's also "shared-priority" which is not on the list
<simbergm> it's jbjnr's creation, he can tell you more about it
<simbergm> the other way requires recompiling hpx, but you can choose what type of queue to use in different places
<jaafar> oh super, thank you
<simbergm> that's where the type of a scheduler is set, most schedulers take three queue types as template parameters: the first is for "pending" tasks (i.e. ready tasks), the second for "staged" tasks (ready to run, but not full tasks yet), and the third for "terminated" tasks (which is self-explanatory)
<jaafar> good stuff
<simbergm> jbjnr has a bunch of new changes for his scheduler that aren't yet on master, but if you ask him (he's offline now apparently) he might have a branch for you to test
<jaafar> Right now I'm still trying to figure out whether the dependencies are expressed in a way where a scheduler can actually do a good job or not :)
<jaafar> Although here's one thing - I'm struggling to get "high priority" working
<jaafar> none of the queueing options appear to make HPX actually select tasks I've marked high priority over those I haven't
<jaafar> i.e. async(thread_priority_high) and async seem to have equal priority
<simbergm> jaafar: also in progress on jbjnr's scheduler ;)
<simbergm> basically a worker thread processes tasks in this order: local high priority pending, local normal priority pending, local high priority staged, local normal priority staged, and only then does it try to steal high priority tasks from other threads, and lastly normal priority tasks from other threads
<simbergm> this is slightly backwards in terms of priorities, but there's a tradeoff between prioritizing high priority tasks and processing local tasks
<simbergm> many applications will also have no high priority tasks so trying to process all high priority tasks first will slow those applications down...
<simbergm> some of these knobs will be tunable at runtime after 4104 is merged (it's up to schedulers to respect them though)