aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
bikineev has joined #ste||ar
vamatya has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
vamatya has quit [Ping timeout: 240 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoun_ has quit [Ping timeout: 264 seconds]
mcopik has quit [Ping timeout: 248 seconds]
K-ballo has quit [Quit: K-ballo]
parsa has joined #ste||ar
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
vamatya has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
vamatya has quit [Read error: Connection reset by peer]
vamatya has joined #ste||ar
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
parsa has quit [Client Quit]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 260 seconds]
bikineev has joined #ste||ar
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
david_pfander has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
Matombo has joined #ste||ar
Matombo has quit [Ping timeout: 255 seconds]
Matombo has joined #ste||ar
bikineev has quit [Remote host closed the connection]
<K-ballo>
denis_blank: we've been careful with our sfinae to only use forms that are tolerated by both gcc and msvc; I plan to do the same for pack traversal
<heller>
zbyerly: looks like a regression in broadcast. Mind putting together a small reproducible test case showing that error?
<heller>
and file a ticket?
<zbyerly>
heller, as far as I know this doesn't affect the hpxdataflowsimulator, which is the one I'm using. I can try to do that later, but it would probably take me much longer than someone else due to my inability to write C++
<heller>
zbyerly: so you are not directly affected by that error?
<zbyerly>
heller, I can definitely file a ticket
<heller>
zbyerly: that would be great
<zbyerly>
heller, I just checked out an older commit of HPX and I'm moving forward
<zbyerly>
heller, I'll file a ticket now, though
<K-ballo>
denis_blank: it appears to only be guarding is_effective instantiations, is that so?
<denis_blank>
K-ballo: yes, that's correct
<denis_blank>
the traversal will work without full expression sfinae support, but it implements effective skipping of completely untouched elements
<denis_blank>
instead of evaluating a predicate for nested objects, we sfinae out if the nesting contains no element accepted by the mapper
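A minimal sketch of the skipping idea described above, with invented names (not the actual HPX pack-traversal code): the accepting overload is only viable when the mapper can be invoked on the element; otherwise overload resolution falls back to the pass-through overload.

    #include <utility>

    // preferred overload (int): participates only if the mapper accepts the element
    template <typename Mapper, typename T>
    auto remap(Mapper&& m, T&& v, int)
        -> decltype(std::forward<Mapper>(m)(std::forward<T>(v)))
    {
        return std::forward<Mapper>(m)(std::forward<T>(v));   // element accepted
    }

    // fallback overload (long): element is left completely untouched
    template <typename Mapper, typename T>
    T&& remap(Mapper&&, T&& v, long)
    {
        return std::forward<T>(v);
    }

    // usage: remap(mapper, value, 0); the int overload is preferred and drops
    // out via expression SFINAE when the mapper does not accept the element.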
aserio has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
pree has quit [Quit: AaBbCc]
pree has joined #ste||ar
vamatya has joined #ste||ar
<denis_blank>
K-ballo: One alternative would be to explicitly evaluate whether there are nested elements which need to be traversed and do conditional (tag) dispatching.
<K-ballo>
denis_blank: I'm still trying to familiarize myself with the code, is that how sfinae-complete is being used in pack traversal? as an alternative to tag dispatching?
david_pfander has quit [Ping timeout: 248 seconds]
aserio has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
<K-ballo>
test/unit/util/pack_traversal.cpp compiles with the effective thing, I wonder if it still does what it should
<denis_blank>
K-ballo: No, it's not really an alternative to tag dispatching. When you have a sequence of nested objects, you don't want to traverse objects which aren't accepted by the mapper.
<mcopik>
aserio: yes
<denis_blank>
Actually it shouldn't build the test cases with complete expression sfinae support enabled on msvc
<K-ballo>
now I need to actually test that it works
<K-ballo>
(I just moved the remap sfinae checks from defaulted nttp arguments to defaulted function arguments)
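A hypothetical illustration of the two SFINAE placements mentioned above, using invented stand-in names rather than the actual HPX code:

    #include <type_traits>

    template <typename T>
    struct is_effective : std::true_type {};   // stand-in for the real trait

    // check in a defaulted non-type template parameter:
    template <typename T,
              typename std::enable_if<is_effective<T>::value, int>::type = 0>
    void remap_nttp(T&&) {}

    // the same check moved into a defaulted function argument:
    template <typename T>
    void remap_arg(T&&,
        typename std::enable_if<is_effective<T>::value>::type* = nullptr) {}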
<denis_blank>
K-ballo: this is an amazing solution
EverYoung has quit [Ping timeout: 246 seconds]
hkaiser_ has quit [Read error: Connection reset by peer]
aserio has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
hkaiser has joined #ste||ar
akheir has joined #ste||ar
jaafar has joined #ste||ar
<K-ballo>
denis_blank: sorry, had to leave, I'll continue tomorrow
<K-ballo>
is circle-ci linux or macosx?
rod_t has quit [Ping timeout: 246 seconds]
pree has quit [Quit: AaBbCc]
<aserio>
diehlpk_work: yt?
aserio has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
patg[[w]] has joined #ste||ar
patg[[w]] has quit [Client Quit]
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
<heller>
K-ballo: Linux
parsa has joined #ste||ar
<diehlpk_work>
aserio, yes
denis_blank has quit [Quit: denis_blank]
<github>
[hpx] K-ballo opened pull request #2886: Replace Boost.Random with C++11 <random> (master...std-random) https://git.io/v5Vgx
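Not taken from the PR itself, but a sketch of the kind of one-for-one substitution such a change typically involves, since Boost.Random types have direct C++11 <random> counterparts:

    #include <cstdint>
    #include <random>

    // before (Boost.Random):                    after (C++11 <random>):
    //   boost::random::mt19937 gen(seed);         std::mt19937 gen(seed);
    //   boost::random::uniform_int_distribution   std::uniform_int_distribution

    std::uint32_t pick(std::uint32_t seed, int n)
    {
        std::mt19937 gen(seed);                          // engine
        std::uniform_int_distribution<int> dist(0, n - 1);  // distribution
        return static_cast<std::uint32_t>(dist(gen));
    }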
akheir has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
jaafar has quit [Ping timeout: 252 seconds]
hkaiser has joined #ste||ar
eschnett has quit [Quit: eschnett]
<hkaiser>
zbyerly: yt?
<github>
[hpx] hkaiser created fixing_2885 (+1 new commit): https://git.io/v5V6B
<github>
hpx/fixing_2885 fae416b Hartmut Kaiser: Adapt broadcast() to non-unwrapping async<Action>...
<github>
[hpx] hkaiser opened pull request #2887: Adapt broadcast() to non-unwrapping async<Action> (master...fixing_2885) https://git.io/v5V6a
<jbjnr>
heller: what I was interested in is: what bugs you found. I'm very interested in this because I'm working on new schedulers etc. and want to know everything about the existing ones - especially bugs you located, in case I missed them.
<jbjnr>
diehlpk_work: yes, but not this week. Sorry, away from the office etc. But you should not stress yourself too much writing a duff one for a dodgy SC workshop.
<heller>
jbjnr: really just minor oversights
<jbjnr>
list please :)
<diehlpk_work>
jbjnr, I see it more as advertisement for HPX
<heller>
used_processing_units_ not updated correctly. The scheduling loop not shutting down correctly. Threads being scheduled to unassigned PUs
<heller>
Removal and addition of PUs from a pool not being thread safe
<heller>
That should be it
<heller>
jbjnr: ^^
<jbjnr>
ok, thanks.
<heller>
Not pushed yet because I haven't solved the thread safety issue yet
<jbjnr>
nothing that I need to worry about directly. I was not aware of "Removal and addition of PUs from a pool not being thread safe", but I only use that on start, and in my scheduler I have a lock on the initial assign.
<heller>
But other than that, my use case seems to be supported just fine
<jbjnr>
'cos I had a similar problem
<jbjnr>
diehlpk_work: ok
<heller>
Yes, I need to dynamically adapt the number of PUs
<diehlpk_work>
They asked for open-source software for scalability on multiple platforms
bikineev has joined #ste||ar
<diehlpk_work>
So I think HPX should be prominent in this workshop
<heller>
We implemented automatic frequency scaling which is showing a 12% energy decrease for one of our tests with no decrease in runtime
<hkaiser>
heller: that means that the algorithm was bad to begin with
<heller>
No.
<heller>
It is memory bound
<hkaiser>
ok, then it has nothing to do with scaling down for power consumption
<heller>
Hi?
<heller>
Hu?
<hkaiser>
if you're memory bound you are running your code on too many resources to begin with
<heller>
If you have a memory bound code, you can scale down the frequency of your cores without affecting the overall performance
<heller>
Or use less resources, sure
<heller>
In the end, you save energy with both approaches
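A back-of-envelope version of this argument, under standard CMOS power-scaling assumptions (not measured data):

    E \approx P \cdot T, \qquad P_{\mathrm{dyn}} \propto C\,V^2 f, \qquad
    T(f) \approx \text{const for memory-bound code}
    \;\Longrightarrow\; \text{lowering } f \text{ (and } V \text{) lowers } E.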
<jbjnr>
had a very interesting chat with the SWIFT developers today. They have scheduling use cases that we don't handle. I will try to work on their stuff soon!
<hkaiser>
right, so it's exactly as I said - the algorithm was bad to begin with for the amount of resources you used
<jbjnr>
lol
<heller>
hkaiser: sure. You can do both. Reduce the number of cores and scale down the frequency at the same time without sacrificing performance
<hkaiser>
heller: IOW, if your algorithm can utilize all the resources you give it, it will not benefit from throttling
<heller>
Reducing the number of cores is currently broken
<hkaiser>
if it can't utilize the resources then throttling gives you the wrong impression
<hkaiser>
in my book throttling is nonsense, except if you need to save energy and you don't care if you may run slower - everything else is a scam
<heller>
Well, you throttle the number of cores and the frequency. This will give you the best energy footprint
<heller>
I'm with you
<heller>
Sadly, someone proposed to have a solution for that energy-saving problem
<heller>
So I have to work with that guy to deliver ;)
<heller>
When the monitoring overhead alone is at ~10%...
<heller>
In the end, what matters is the raw throughput at full throttle
bikineev has quit [Remote host closed the connection]
<hkaiser>
right
<hkaiser>
at least as long as the centers don't charge for energy but for cpu-hours
aserio has quit [Quit: aserio]
<heller>
Which they probably never will
<github>
[hpx] hkaiser force-pushed fixing_2885 from fae416b to 7eac243: https://git.io/v5V1P
<github>
hpx/fixing_2885 7eac243 Hartmut Kaiser: Adapt broadcast() to non-unwrapping async<Action>...
david_pfander has joined #ste||ar
<github>
[hpx] hkaiser created fixing_2881 (+1 new commit): https://git.io/v5VM8
<github>
hpx/fixing_2881 942580c Hartmut Kaiser: Fix compilation problems if HPX_WITH_ITT_NOTIFY=ON...
<github>
[hpx] hkaiser opened pull request #2888: Fix compilation problems if HPX_WITH_ITT_NOTIFY=ON (master...fixing_2881) https://git.io/v5VMu
<github>
[hpx] K-ballo opened pull request #2889: Add check for libatomic (master...libatomic-check) https://git.io/v5VMw
david_pfander has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]