aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
quaz0r has quit [Ping timeout: 248 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 250 seconds]
EverYoun_ has quit [Ping timeout: 258 seconds]
EverYoung has joined #ste||ar
quaz0r has joined #ste||ar
EverYoun_ has joined #ste||ar
jakemp has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
EverYoun_ has quit [Ping timeout: 255 seconds]
eschnett has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa| has joined #ste||ar
parsa has quit [Read error: Connection reset by peer]
jaafar has quit [Ping timeout: 258 seconds]
<github> [hpx] sithhell force-pushed remove_component_factory from 0db4a50 to a7fc77a: https://git.io/vF2gP
<github> hpx/remove_component_factory a7fc77a Thomas Heller: Cleanup up and Fixing component creation and deletion...
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/vF59h
<github> hpx/master d421371 Thomas Heller: Merge pull request #3015 from STEllAR-GROUP/performance_optimizations...
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/vF5He
<github> hpx/master e7e1ee2 Thomas Heller: Merge pull request #3016 from STEllAR-GROUP/fixing_stackoverflow_options...
jaafar has joined #ste||ar
<heller> msimberg: so, what's the plan with the throttling now?
jaafar has quit [Ping timeout: 246 seconds]
jaafar_ has joined #ste||ar
parsa| has quit [Quit: Zzzzzzzzzzzz]
jaafar_ has quit [Ping timeout: 248 seconds]
msimberg has quit [Read error: Connection reset by peer]
simbergm has joined #ste||ar
<simbergm> heller: yt?
<heller> simbergm: hey
<simbergm> so the plan for throttling (for me at least) is the same as before discussing with hkaiser yesterday
<simbergm> i.e. strict and relaxed stopping modes
<heller> ok
<heller> good
<simbergm> I will try to get it into shape today
<heller> I'll work on something else and let you ponder
<simbergm> ok
<simbergm> currently I've added another state for the relaxed stopping mode
<simbergm> it feels a bit wrong but I also don't want to add any state to the scheduler as this would only be needed while shutting down
<simbergm> heller: do you have an established way of signaling something to the scheduling loop without storing it in the scheduler? sounds almost impossible...
<heller> hmmm
<heller> so you want to signal "suspend"?
<heller> or "stop"?
<simbergm> heller: in this case stop, stop strict and stop relaxed
<simbergm> I've ignored suspend on this branch, but I would signal suspend through this as well
<heller> ok, I think in that case, you won't get around having more states
<simbergm> I think this would be a short-term solution anyway; once suspend is there I would like to have stop be strict only again
<heller> yeah ....
<heller> not sure if we shouldn't go for suspend to begin with
<simbergm> that would be fine with me as well, I have something started on that as well
<simbergm> just wasn't sure how soon you'd like to have the throttle stuff fixed
<simbergm> suspend will take longer, although if I restrict it to only suspending threads (not pools, runtime) then it should be fairly small and self-contained
<simbergm> heller: if throttle is not super urgent I will then go ahead to suspending threads instead, are you ok with that?
<simbergm> which to me means not allowing fully removing pus dynamically, agree?
<jbjnr> simbergm: from the point of view of cscs - we don't care about throttling anyway, so if you have to break that to fix suspend/resume properly ...
<simbergm> well, I don't *need* to break it, but I see suspending as just replacing that functionality
<jbjnr> ok
<simbergm> jbjnr: ok to what? :)
<jbjnr> to what you wrote
<jbjnr> "suspending as just replacing that functionality"
<simbergm> also, as throttling didn't really work properly until now, I don't see this as breaking it
<simbergm> is there a use case I'm missing? where you'd actually want to remove pus and not just suspend?
<jbjnr> What I wanted to say, but without upsetting heller too much, was that nobody uses throttling except one project of heller's, which will be cancelled as soon as the funding ends, so don't spend too much time fixing his use-case if it prevents the wider one from working :)
<jbjnr> <cough>
<jbjnr> <cough>
* zao hands out cough mints
<jbjnr> thanks
<simbergm> actually I thought heller wrote that comment, it makes more sense coming from you though... :)
<heller> lol
<heller> I can't comment further as the channel is logged ;)
<simbergm> uh oh :)
<simbergm> but heller, you agree more or less?
<heller> yes
<simbergm> heller: side question, should this line be threads::get_self_ptr?
<simbergm> is it to check that you're in the runtime?
david_pfander has joined #ste||ar
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vFdfK
<github> hpx/gh-pages 8d5f777 StellarBot: Updating docs
<heller> simbergm: it's to check if we are running in a HPX thread
<simbergm> heller: right, that's what I meant ;) get_runtime_ptr returns the runtime pointer no matter where you call it from though, so get_self_ptr seems like the correct thing...
<heller> simbergm: yeah...
ABresting has joined #ste||ar
hkaiser has joined #ste||ar
<heller> hkaiser: I hate it when you are right...
<hkaiser> ;)
<hkaiser> g'morning
<heller> good morning
<heller> the grain size prediction was indeed flawed
<wash> OMG
<wash> the real world of software engineering is painful sometimes
<wash> I spent 6 hours today writing up procedures for committing code
<heller> wash: good luck
<hkaiser> lol
<wash> lol
<wash> it was worse than that
<wash> because these were procedures for open source stuff
<heller> hkaiser: I am currently looking into finding a good function to fit against for the grain sizes... any ideas?
<wash> so I have to deal with legal things
<wash> anyways
<wash> hkaiser: does hpx have parallel nth_element?
<simbergm> what's the status of this page: https://stellar-group.github.io/hpx/ ? seems to be just an (old) readme
<wash> Billy and I are wondering if that's even possible to implement sanely
<hkaiser> heller: yah, difficult - didn't think of that
<hkaiser> wash: we have only sort() from the algorithms related to ordering
<hkaiser> and is_sorted
<wash> k
<hkaiser> wash: nobody has taken this on yet
<heller> I am getting decent results with a rational function of degree 2... would be nice to do better and get a more accurate prediction
<hkaiser> heller: some higher order even polynomial perhaps
<heller> no, it gets even worse ;)
<heller> ahh, higher order even polynomial
<heller> ok
<heller> I can try that
<heller> hkaiser: the thing is that we know what *should* happen at grain_size == 0
<hkaiser> grainsize == 0?
<hkaiser> define grainsize, then
<heller> no real work
<heller> grain size is the average time of a task performing real work
<hkaiser> is that a thing at all?
<hkaiser> I meant no real work
<heller> well, it is an extreme point
<heller> the idea is to pass this constraint to the curve fitting to better find the parameter
<hkaiser> ok, do we know that?
<heller> I think so, yes
<hkaiser> we also know the dependency for cores == 1
<heller> dependencies?
<hkaiser> speedup over grainsize for num cores == 1
<heller> true
<heller> good point
<heller> the curves are sufficiently different for the different core counts though
<heller> I am not sure if that information is of any value
<hkaiser> show me
<heller> still a little messy. The grain size is on the x axis, the y axis is tasks/second
<hkaiser> nod
<hkaiser> doesn't look too bad
<heller> the fitted curve is a rational function
<heller> nope
<hkaiser> heller: that's a nice result
<heller> hkaiser: yeah ... getting there
<heller> I am not happy with the fitted curves though
<hkaiser> heller: you might want to talk to pat to get access to her data
<heller> yes, that's the plan
<hkaiser> yah - a mathematician might have an idea
<heller> I want to reproduce it
<heller> all in all, this is very nice
<hkaiser> yes
<heller> with this toolkit, we can pretty nicely have a first step towards autotuning
<heller> the idea is to reach the maximal throughput within a few iterations of different parameters
<heller> you can see the curves getting better and better the more sample points you add
<heller> so for example, if you omit the data points for the finest granularity, you lose the saddle point
<heller> but with some analysis on the resulting functions, you can actually steer the bisection into the right direction
jaafar_ has joined #ste||ar
jaafar_ has quit [Ping timeout: 240 seconds]
K-ballo has joined #ste||ar
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
hkaiser has quit [Client Quit]
hkaiser has joined #ste||ar
aserio has joined #ste||ar
jbjnr has quit [Quit: Leaving]
jbjnr has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
mcopik has joined #ste||ar
<diehlpk_work> hkaiser, heller yah - a mathematician might have an idea - here is one
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
<hkaiser> diehlpk_work: the fix for the stackoverflow config mess was merged this morning
<hkaiser> hold on, it was not comprehensive :/
<hkaiser> let me have a look one more time
<diehlpk_work> heller, How can I help you?
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
<diehlpk_work> hkaiser, This morning, I will write on the paper and will play around with hpx after lunch
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
jbjnr has quit [Quit: Leaving]
jbjnr has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
* jbjnr this is a test
<jbjnr> :(
<zao> jbjnr: beep boop?
<jbjnr> boop beep
<zao> Ah, turned off join/part filters, I see what you mean.
<jbjnr> trying to get smileys to appear as faces instead of as :)
<jbjnr> doesn't work
<diehlpk_work> jbjnr, Sounds fancy
<diehlpk_work> Why not emojis?
<jbjnr> hexchat on windows doesn't support them. I will switch irc clients. I used to use a firefox plugin, but it doesn't work with latest firefox
<diehlpk_work> jbjnr, On 8.1 you can use the font Segoe UI Symbol and it should work
<jbjnr> I have Segoe UI Emoji, but it doesn't seem to help. I'll try...
jbjnr has quit [Quit: Leaving]
jbjnr has joined #ste||ar
<jbjnr> another test :)
<jbjnr> balls
<hkaiser> jbjnr: I do see faces ;)
<jbjnr> well done
<jbjnr> :)
<jbjnr> my numa hint executor+scheduler is slower than the old one. I give up!
<hkaiser> jbjnr: nod, I expected that
<jbjnr> rubbish!
<hkaiser> the hwloc memory binding and query functions are very slow - I think I mentioned this
EverYoung has joined #ste||ar
<jbjnr> I don't think that's it, but I will test for that, since I can skip the hwloc lookup for this test and compute the tile/domain from the index
<jbjnr> I see some badness in the task traces that looks like old scheduler problems reappearing
<jbjnr> gtg bbiab
david_pfander has quit [Ping timeout: 258 seconds]
patg has joined #ste||ar
patg is now known as Guest37410
Guest37410 is now known as patg[w]
patg[w] is now known as patg[[w]]
<github> [hpx] hkaiser created pool_elasticity (+1 new commit): https://git.io/vFdhg
<github> hpx/pool_elasticity b12dd16 Hartmut Kaiser: Adding enable_elasticity option to pool configuration...
EverYoung has quit [Remote host closed the connection]
<github> [hpx] hkaiser opened pull request #3019: Adding enable_elasticity option to pool configuration (master...pool_elasticity) https://git.io/vFdhy
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<K-ballo> how am I supposed to run inspect for a single check, say the assert_macro check?
jakemp has quit [Ping timeout: 268 seconds]
patg[[w]] has quit [Quit: This computer has gone to sleep]
patg[[w]] has joined #ste||ar
mcopik has quit [Ping timeout: 250 seconds]
EverYoun_ has joined #ste||ar
* K-ballo has failed at boost.program_options
<K-ballo> I'll just hardcode the arguments
EverYou__ has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
EverYoun_ has quit [Ping timeout: 255 seconds]
ABresting has quit [Quit: Connection closed for inactivity]
mcopik has joined #ste||ar
aserio has quit [Ping timeout: 252 seconds]
jakemp has joined #ste||ar
<jakemp> I'm running some EPCC-like benchmarks, which basically create the same number of threads in different ways. For some reason, the HPX version of 1 thread creating all of the threads (with async) is the slowest by a factor of 10-15.
<jakemp> Is there any reason for this, or some way to avoid it?
<hkaiser> jakemp: uhh
<hkaiser> jakemp: how many tasks is that?
<jakemp> 64k
<hkaiser> should just fly
<hkaiser> jakemp: what should I do to reproduce that?
<jakemp> it's standalone, so just that line with that file
<hkaiser> how do you run it?
<jakemp> just ./taskbench
<hkaiser> k
<hkaiser> will try
<jakemp> thanks
<jakemp> the number of tasks scales with the number of threads, so 1024 per thread. I tried it on lower core counts though and got similar results
<jakemp> also, it's the second test
<hkaiser> k
<github> [hpx] hkaiser pushed 1 new commit to pool_elasticity: https://git.io/vFFYX
<github> hpx/pool_elasticity 6f98361 Hartmut Kaiser: Adding missing namespace qualifications
<github> [hpx] hkaiser force-pushed pool_elasticity from 6f98361 to a15c9ff: https://git.io/vFFsk
<github> hpx/pool_elasticity a15c9ff Hartmut Kaiser: Adding enable_elasticity option to pool configuration...
aserio has joined #ste||ar
<github> [hpx] hkaiser created command_line (+1 new commit): https://git.io/vFFs5
<github> hpx/command_line 8ea460e Hartmut Kaiser: Disable command-line aliasing for applications that use user_main...
parsa has joined #ste||ar
parsa has quit [Client Quit]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
jbjnr_ has joined #ste||ar
<jbjnr_> heller: jbjnr this is a smiley :)
<jbjnr_> grrr.
<jbjnr_> sorry, that was supposed to say hello not heller
<jbjnr_> some dodgy autocomplete on this irc client. not having a lot of luck today with IRC config.
eschnett has quit [Quit: eschnett]
jbjnr_ has quit [Remote host closed the connection]
jbjnr_ has joined #ste||ar
<heller> Why is that so important to you?
<aserio> hkaiser: yt?
jaafar_ has joined #ste||ar
aserio has quit [Quit: aserio]
EverYou__ has quit []
EverYoung has joined #ste||ar
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
patg[[w]] has quit [Quit: Leaving]
jakemp has quit []
jakemp has joined #ste||ar