aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
bikineev has joined #ste||ar
vamatya has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
vamatya has quit [Ping timeout: 240 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoun_ has quit [Ping timeout: 264 seconds]
mcopik has quit [Ping timeout: 248 seconds]
K-ballo has quit [Quit: K-ballo]
parsa has joined #ste||ar
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
vamatya has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
vamatya has quit [Read error: Connection reset by peer]
vamatya has joined #ste||ar
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
parsa has quit [Client Quit]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 260 seconds]
bikineev has joined #ste||ar
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
david_pfander has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
Matombo has joined #ste||ar
Matombo has quit [Ping timeout: 255 seconds]
Matombo has joined #ste||ar
bikineev has quit [Remote host closed the connection]
Matombo has quit [Ping timeout: 240 seconds]
<github> [hpx] sithhell deleted fix_cuda at 3a72e69: https://git.io/v52Fe
<github> [hpx] sithhell deleted ucx_pp at 00c495d: https://git.io/v52FL
<github> [hpx] sithhell deleted gid_target at c7d7238: https://git.io/v52Fq
Matombo has joined #ste||ar
quaz0r has quit [Ping timeout: 240 seconds]
quaz0r has joined #ste||ar
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
mcopik has joined #ste||ar
Matombo has quit [Ping timeout: 240 seconds]
denis_blank has joined #ste||ar
david_pfander has quit [Remote host closed the connection]
pree has joined #ste||ar
taeguk has joined #ste||ar
<taeguk> hkaiser: jbjnr: GSoC is finished. Thanks so much to everyone who helped me :)
<hkaiser> taeguk: most welcome!
<hkaiser> taeguk: I hope you'll have some time to continue working on this
<taeguk> hkaiser: Of course!
<hkaiser> :-)
<hkaiser> great!
<taeguk> I will keep working on HPX :)
<hkaiser> thanks! this is really appreciated
david_pfander has joined #ste||ar
<github> [hpx] hkaiser closed pull request #2874: Changed serialization of boost.variant to use variadic templates (master...serialize_boost_variant) https://git.io/v5W75
aserio has joined #ste||ar
<github> [hpx] hkaiser pushed 1 new commit to inspect_assert: https://git.io/v5aRN
<github> hpx/inspect_assert f036fe6 Hartmut Kaiser: Merge branch 'master' into inspect_assert
<heller> hkaiser: getting the dynamic throttling to work is quite a bit of work now ;)
<heller> getting there ... first testcase is passing
<hkaiser> heller: just move the existing code into user-land
<heller> it's not that easy
<hkaiser> what's difficult?
<heller> I want to use the remove/add processing unit functionality of the thread pools
<heller> which in theory is easy, of course
<hkaiser> but then the difficult part is new functionality
<hkaiser> not just moving it out of hpx
<heller> if it weren't buggy ;)
<heller> fixing it right now
<hkaiser> what's buggy?
<hkaiser> the scheduler?
<heller> I've found bugs in the thread pool implementation, the scheduling loop, and the scheduler so far, yes
<hkaiser> nice
<hkaiser> heller: so none of the difficulty comes from the fact that you want to move the throttle scheduler out of hpx
<heller> no ;)
<hkaiser> things wouldn't have worked either way
<heller> they obviously did work before the RP merge :P
<hkaiser> shrug, that's what I said - it shouldn't be hard
<hkaiser> heller: write tests
<hkaiser> don't just fix it
<heller> that's what I am doing
<heller> I just wanted to stress that "Just use the new functionality, everything is there and it should be easy" is only half of the story
<heller> but yes. In the end, the code will be better
<hkaiser> heller: you're my hero
zbyerly_ has joined #ste||ar
hkaiser has quit [Quit: bye]
zbyerly_ has quit [Ping timeout: 252 seconds]
bikineev has joined #ste||ar
eschnett has joined #ste||ar
hkaiser has joined #ste||ar
diehlpk_work has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]
bikineev has quit [Ping timeout: 240 seconds]
eschnett has quit [Quit: eschnett]
Matombo has joined #ste||ar
taeguk has quit [Quit: Page closed]
bikineev has joined #ste||ar
eschnett has joined #ste||ar
hkaiser has joined #ste||ar
zbyerly_ has joined #ste||ar
Matombo has quit [Ping timeout: 240 seconds]
<github> [hpx] K-ballo force-pushed std-random from 5931a24 to ed55481: https://git.io/v5gMI
<github> hpx/std-random b47ffc5 Agustin K-ballo Berge: Replace Boost.Random with C++11 <random>
<github> hpx/std-random ed55481 Agustin K-ballo Berge: Add inspect check for deprecated Boost.Random
aserio has quit [Ping timeout: 260 seconds]
aserio has joined #ste||ar
<zbyerly> getting this error with libgeodecomp (https://github.com/zbyerly/libgeodecomp):
<aserio> wash, wash[m]: Will you be joining us today?
hkaiser has quit [Read error: Connection reset by peer]
Matombo has joined #ste||ar
<zao> So you're trying to resolve a future<vector<future<vector<double>>>> into a vector<vector<double>>?
<zao> Has HPX done such recursive unwrapping in the past?
<zao> (got no idea about the parts involved)
hkaiser has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
<zbyerly_> zao, this is something that worked ~3 weeks ago
<zao> Nifty.
<zbyerly_> zao, maybe it's related to the unwrapped -> unwrap thing
pree has joined #ste||ar
<denis_blank> zao: unwrap is capable of unwrapping this -> unwrap_all(future<vector<future<vector<double>>>>{}) (or unwrap_n<2>(...))
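A minimal sketch of the unwrapping denis_blank describes, assuming HPX's hpx::util::unwrap_all / unwrap_n from <hpx/util/unwrap.hpp> (the renamed API; exact header paths may vary between HPX versions):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/lcos.hpp>
    #include <hpx/util/unwrap.hpp>

    #include <utility>
    #include <vector>

    int main()
    {
        // A nested future of the shape discussed above, e.g. as produced by
        // composing dataflow/when_all over a vector of futures
        hpx::future<std::vector<hpx::future<std::vector<double>>>> nested =
            hpx::make_ready_future(
                std::vector<hpx::future<std::vector<double>>>{});

        // unwrap_all strips every level of future, yielding vector<vector<double>>
        // (unwrap_n<2> would strip exactly two levels, as mentioned above)
        std::vector<std::vector<double>> values =
            hpx::util::unwrap_all(std::move(nested));

        return 0;
    }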
zbyerly_ has quit [Remote host closed the connection]
<zao> Not much help from me but "maybe squint at the commits between then and now"
<zao> Which you seem to have done.
zbyerly_ has joined #ste||ar
hkaiser_ has joined #ste||ar
aserio has quit [Ping timeout: 255 seconds]
<jbjnr> heller: could you tell me what bugs you found in the scheduler, scheduling loop, etc.? Thanks
jbjnr has quit [Remote host closed the connection]
zbyerly_ has quit [Ping timeout: 246 seconds]
hkaiser has quit [Ping timeout: 240 seconds]
pree has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
jbjnr has joined #ste||ar
<jbjnr> whoops. not sure if my last message went through to heller
<diehlpk_work> jbjnr, Still interested in the OpenSuCo paper?
pree has joined #ste||ar
pree has quit [Ping timeout: 240 seconds]
<aserio> hkaiser_: can I claim this: "The library is fully compliant with the C++17 Standard" on the poster?
parsa has joined #ste||ar
<K-ballo> you can
<aserio> K-ballo: thanks!
<K-ballo> hah, I was... ehm, ok
<heller> jbjnr: I'll push later
<aserio> :D
<heller> jbjnr: needed your latest patches as well
<K-ballo> such a vague claim
<aserio> I think posters have a little wiggle room
<aserio> If we went into the specifics no one (normal) could understand it
<aserio> The point is, if you know C++17 you will feel at home
<aserio> mcopik: yt?
<aserio> heller: see pm
pree has joined #ste||ar
pree_ has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
pree_ is now known as pree
<K-ballo> denis_blank: yt?
Matombo has quit [Remote host closed the connection]
bikineev has quit [Read error: No route to host]
bikineev has joined #ste||ar
<github> [hpx] K-ballo created libatomic-check (+1 new commit): https://git.io/v5VIG
<github> hpx/libatomic-check eaad4fc Agustin K-ballo Berge: Add check for libatomic
<denis_blank> K-ballo: yes
<heller> zbyerly: what's the error?
<diehlpk_work> hkaiser_, thundergroudon[m] gets this warning on Windows: warning: LF will be replaced by CRLF
EverYoung has joined #ste||ar
<K-ballo> denis_blank: hey, tell me more about SFINAE_EXPRESSION_COMPLETE, presumably it fails on msvc?
<diehlpk_work> Any idea why? I can not help him because I do not have Windows
<K-ballo> now that gsoc is over I intend to nuke that config macro
<heller> diehlpk_work: wrong editor settings - Windows vs. Unix line breaks
<heller> zbyerly: ok, can you point me to the code causing it?
<diehlpk_work> No, he promised to use Unix line breaks
<denis_blank> K-ballo it's because MSVC doesn't provide full expression sfinae capability
<K-ballo> denis_blank: we've been careful with our sfinae to only use forms that are tolerated by both gcc and msvc, I plan to do the same for pack traversal
<heller> zbyerly: looks like a regression in broadcast. Mind putting together a small reproducible test case showing that error?
<heller> and file a ticket?
<zbyerly> heller, as far as I know this doesn't affect the hpxdataflowsimulator, which is the one I'm using. I can try to do that later, but it would probably take me much longer than someone else due to my inability to write C++
<heller> zbyerly: so you are not directly affected by that error?
<zbyerly> heller, I can definitely file a ticket
<heller> zbyerly: that would be great
<zbyerly> heller, I just checked out an older commit of HPX and I'm moving forward
<zbyerly> heller, I'll file a ticket now, though
<K-ballo> denis_blank: it appears to only be guarding is_effective instantiations, is that so?
<denis_blank> K-ballo: yes, that's correct
<denis_blank> the traversal will work without full expression sfinae support, but with it, it implements effective skipping of completely untouched elements
<denis_blank> instead of evaluating a predicate for nested objects, we sfinae out if there is no element in the nesting that is accepted by the mapper
aserio has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
pree has quit [Quit: AaBbCc]
pree has joined #ste||ar
vamatya has joined #ste||ar
<denis_blank> K-ballo: One alternative would be to explicitly evaluate whether there are nested elements which need to be traversed and do conditional (tag) dispatching.
<K-ballo> denis_blank: I'm still trying to familiarize myself with the code, is that how sfinae-complete is being used in pack traversal? as an alternative to tag dispatching?
<K-ballo> denis_blank: I've reduced a test to this https://gist.github.com/K-ballo/0eab4545d1ce8f763bd5a270b4fa57ef , the assertion fires for msvc if sfinae-complete
david_pfander has quit [Ping timeout: 248 seconds]
aserio has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
<K-ballo> test/unit/util/pack_traversal.cpp compiles with the effective thing, I wonder if it still does what it should
<denis_blank> K-ballo: No, it's not really an alternative to tag dispatching. When you have a sequence of nested objects, you don't want to traverse objects which aren't accepted by the mapper.
<mcopik> aserio: yes
<denis_blank> Actually it shouldn't build the test cases with complete expression sfinae support enabled on msvc
<aserio> see pm
<K-ballo> it didn't, I "fixed" it
<denis_blank> how?
<K-ballo> now I need to actually test that it works
<K-ballo> (I just moved the remap sfinae checks from defaulted nttp arguments to defaulted function arguments)
<denis_blank> K-ballo this is an amazing solution
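To make the workaround concrete, here is a self-contained sketch of the pattern K-ballo describes (the names remap_nttp/remap_arg are purely illustrative, not the actual pack-traversal code): the same expression-SFINAE check written once as a defaulted non-type template parameter and once as a defaulted function argument, the latter being the form older MSVC releases tend to tolerate:

    #include <utility>

    // C++14-style void_t helper
    template <typename...>
    using void_t = void;

    // Variant 1: SFINAE check in a defaulted non-type template parameter
    // (the form that trips up MSVC without full expression-SFINAE support)
    template <typename Mapper, typename T,
        void_t<decltype(std::declval<Mapper&>()(std::declval<T&>()))>* = nullptr>
    void remap_nttp(Mapper m, T t)
    {
        m(t);
    }

    // Variant 2: the identical check moved into a defaulted function argument
    template <typename Mapper, typename T>
    void remap_arg(Mapper m, T t,
        void_t<decltype(std::declval<Mapper&>()(std::declval<T&>()))>* = nullptr)
    {
        m(t);
    }

    int main()
    {
        auto twice = [](int i) { return 2 * i; };
        remap_nttp(twice, 21);    // both forms accept the callable/value pair
        remap_arg(twice, 21);
        return 0;
    }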
EverYoung has quit [Ping timeout: 246 seconds]
hkaiser_ has quit [Read error: Connection reset by peer]
aserio has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
hkaiser has joined #ste||ar
akheir has joined #ste||ar
jaafar has joined #ste||ar
<K-ballo> denis_blank: sorry, had to leave, I'll continue tomorrow
<K-ballo> is circle-ci linux or macosx?
rod_t has quit [Ping timeout: 246 seconds]
pree has quit [Quit: AaBbCc]
<aserio> diehlpk_work: yt?
aserio has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
patg[[w]] has joined #ste||ar
patg[[w]] has quit [Client Quit]
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
<heller> K-ballo: Linux
parsa has joined #ste||ar
<diehlpk_work> aserio, yes
denis_blank has quit [Quit: denis_blank]
<github> [hpx] K-ballo opened pull request #2886: Replace Boost.Random with C++11 <random> (master...std-random) https://git.io/v5Vgx
akheir has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
jaafar has quit [Ping timeout: 252 seconds]
hkaiser has joined #ste||ar
eschnett has quit [Quit: eschnett]
<hkaiser> zbyerly: yt?
<github> [hpx] hkaiser created fixing_2885 (+1 new commit): https://git.io/v5V6B
<github> hpx/fixing_2885 fae416b Hartmut Kaiser: Adapt broadcast() to non-unwrapping async<Action>...
<github> [hpx] hkaiser opened pull request #2887: Adapt broadcast() to non-unwrapping async<Action> (master...fixing_2885) https://git.io/v5V6a
<jbjnr> heller: what I was interested in is: what bugs you found. I'm very interested in this because I'm working on new schedulers etc. and want to know everything about the existing ones - especially bugs you located, in case I missed them.
<jbjnr> diehlpk_work: yes, but not this week. Sorry, away from the office etc. But you should not stress yourself too much over writing a duff one for a dodgy SC workshop.
<heller> jbjnr: really just minor oversights
<jbjnr> list please :)
<diehlpk_work> jbjnr, I see it more as advertisement for HPX
<heller> used_processing_units_ not updated correctly. The scheduling loop not shutting down correctly. Threads being scheduled to unassigned PUs
<heller> Removal and addition of pus from a pool not being thread safe
<heller> That should be it
<heller> jbjnr: ^^
<jbjnr> ok, thanks.
<heller> Not pushed yet because I haven't solved the thread safety issue yet
<jbjnr> nothing that I need to worry about directly. I was not aware of "Removal and addition of pus from a pool not being thread safe", but I only use that at startup, and in my scheduler I have a lock on the initial assign.
<heller> But other than that, my use case seems to be supported just fine
<jbjnr> 'cos I had a similar problem
<jbjnr> diehlpk_work: ok
<heller> Yes, I need to dynamically adapt the number of PUs
<diehlpk_work> They asked for open-source software for scalability across multiple platforms
bikineev has joined #ste||ar
<diehlpk_work> So I think HPX should be prominent in this workshop
<heller> We implemented automatic frequency scaling, which is showing a 12% energy decrease for one of our tests with no change in runtime
<hkaiser> heller: that means that the algorithm was bad to begin with
<heller> No.
<heller> It is memory bound
<hkaiser> ok, then it has nothing to do with scaling down for power consumption
<heller> Huh?
<hkaiser> if you're memory bound, you are running your code on too many resources to begin with
<heller> If you have a memory-bound code, you can scale down the frequency of your cores without affecting the overall performance
<heller> Or use fewer resources, sure
<heller> In the end, you save energy with both approaches
<jbjnr> had a very interesting chat with the SWIFT developers today. They have scheduling use cases that we don't handle. I will try to work on their stuff soon!
<hkaiser> right, so it's exactly as I said - the algorithm was bad to begin with for the amount of resources you used
<jbjnr> lol
<heller> hkaiser: sure. You can do both: reduce the number of cores and scale down the frequency at the same time without sacrificing performance
<hkaiser> heller: IOW, if your algorithm can utilize all the resources you give it, it will not benefit from throttling
<heller> Reducing the number of cores is currently broken
<hkaiser> if it can't utilize the resources then throttling gives you the wrong impression
<hkaiser> in my book throttling is nonsense, except if you need to save energy and you don't care if you may run slower - everything else is a scam
<heller> Well, you throttle the number of cores and the frequency. This will give you the best energy footprint
<heller> I'm with you
<heller> Sadly, someone proposed to have a solution for that energy saving problem
<heller> So I have to work with that guy to deliver ;)
<heller> When the monitoring overhead alone is at ~10%...
<heller> In the end, what matters is the raw throughput at full throttle
bikineev has quit [Remote host closed the connection]
<hkaiser> right
<hkaiser> at least as long as the centers don't charge for energy but for cpu-hours
aserio has quit [Quit: aserio]
<heller> Which they probably never will
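A back-of-the-envelope sketch of the reasoning above (a rough model, purely illustrative - not the 12% measurement quoted earlier): dynamic CPU power grows roughly with V^2*f, while the runtime of a memory-bound kernel is set by memory bandwidth and is largely insensitive to core frequency, so lowering f (and V, where DVFS allows) cuts energy with little runtime penalty:

    % rough model, static power ignored
    P_{\mathrm{dyn}} \propto C\,V^2 f, \qquad
    T_{\text{mem-bound}}(f) \approx \text{const}, \qquad
    E(f) = P(f)\,T(f)
    \;\Rightarrow\;
    \frac{E(f_2)}{E(f_1)} \approx \frac{V_2^2\,f_2}{V_1^2\,f_1}
    \quad \text{for } f_2 < f_1.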
<github> [hpx] hkaiser force-pushed fixing_2885 from fae416b to 7eac243: https://git.io/v5V1P
<github> hpx/fixing_2885 7eac243 Hartmut Kaiser: Adapt broadcast() to non-unwrapping async<Action>...
david_pfander has joined #ste||ar
<github> [hpx] hkaiser created fixing_2881 (+1 new commit): https://git.io/v5VM8
<github> hpx/fixing_2881 942580c Hartmut Kaiser: Fix compilation problems if HPX_WITH_ITT_NOTIFY=ON...
<github> [hpx] hkaiser opened pull request #2888: Fix compilation problems if HPX_WITH_ITT_NOTIFY=ON (master...fixing_2881) https://git.io/v5VMu
<github> [hpx] K-ballo opened pull request #2889: Add check for libatomic (master...libatomic-check) https://git.io/v5VMw
david_pfander has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar