aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
Smasher has quit [Remote host closed the connection]
parsa has quit [Quit: Zzzzzzzzzzzz]
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
hkaiser has joined #ste||ar
mcopik has quit [Ping timeout: 276 seconds]
parsa has joined #ste||ar
hkaiser has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
parsa has quit [Quit: Zzzzzzzzzzzz]
eschnett has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
<Zwei>
Slight change of plan, I have an interview on Monday, so preparing for that. However, I am very keen to contribute to HPX, since I really do enjoy high performance computing and parallel programming a lot.
<Zwei>
So even if I get a job, I'll be working on HPX in the evenings and weekends (hopefully)
<Zwei>
< diehlpk_work> Zwei, Which project?
<Zwei>
< diehlpk_work> Zwei, Which project?
<Zwei>
< diehlpk_work> Zwei, Which project?
<Zwei>
HPX I think
<Zwei>
(idk why that pasted 3 times....)
<Zwei>
C++ interview on Monday, yaaay :)
<heller_>
congrats
jbjnr has quit [Ping timeout: 255 seconds]
jaafar has joined #ste||ar
jbjnr has joined #ste||ar
<simbergm>
jbjnr: that leak is on me, still open at #2974
<jbjnr>
simbergm: ok. I think I fixed it before by wrapping it in #ifdef for ITTNOTIFY but I forgot about it again
<simbergm>
jbjnr: the leak would still be there with ITT enabled though
<jbjnr>
understood
jaafar has quit [Ping timeout: 252 seconds]
david_pfander has joined #ste||ar
simbergm has quit [Ping timeout: 260 seconds]
simbergm has joined #ste||ar
simbergm has quit [Ping timeout: 248 seconds]
simbergm has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser has quit [Ping timeout: 256 seconds]
K-ballo has joined #ste||ar
<heller_>
ha!
<heller_>
with the Coroutine TS, we can get rid of our custom context switching entirely
<heller_>
very simple proof of concept of doing context switches with the coro ts
<jbjnr>
hkaiser: heller_ I will need to have a skype call with both of you quite soon. Are both of you around next week?
<hkaiser>
yes
<heller_>
jbjnr: yes, except on tuesday
<heller_>
jbjnr: what do you want to talk about?
<hkaiser>
heller_: all of this works only if the scheduled tasks don't suspend more than one level deep into a callchain, i.e. if they leave their stack 'empty' while suspending
<jbjnr>
heller_: I need to gather information for an HPX talk.
<jbjnr>
also heller_ - the HLRS course - do you still want to do it? I don't much want to.
<heller_>
jbjnr: there's no way out anymore, is there?
<jbjnr>
we can just tell Rolf to cancel it if we don't want to do it.
<heller_>
hkaiser: hmm, seems to work pretty nicely even with a deeper call stack
<heller_>
hkaiser: why do you think it wouldn't work?
<hkaiser>
heller_: ok
<hkaiser>
do your tasks have their own stack?
<heller_>
the question is, if it still works when a task is actually suspended properly instead of just being yielded
<heller_>
only the coroutine frame generated by the compiler
<hkaiser>
ok
<hkaiser>
let's assume task A calls 3 functions, the last (lowest one) is a coroutine with a compiler-generated coro frame, you suspend it by yielding
<hkaiser>
that means it leaves 3 frames on the scheduler stack
<hkaiser>
now B does the same
<hkaiser>
at that point you have no way of safely resuming A
<hkaiser>
because the stack has 3 frames from B below the frames needed by A
Vir has quit [Read error: Connection reset by peer]
<heller_>
B would have its own coro frame, wouldn't it?
<hkaiser>
sure
<hkaiser>
but not all functions in the call chain have a coro frame, do they?
<heller_>
I have to check this
<heller_>
no
<hkaiser>
nod
<hkaiser>
where do those store their frame information?
<heller_>
on the regular call stack
<hkaiser>
right
<heller_>
i guess
<hkaiser>
and those frames are left there untouched if you suspend that task
<heller_>
the question really is how all this is implemented underneath
<heller_>
I'd assume it'll work
<hkaiser>
I don't see how, but I might miss something
Vir has joined #ste||ar
<heller_>
IIUC, each coroutine_handle saves the state (that is the call stack)
<heller_>
I guess I have to implement a proper task queue now ;)
<hkaiser>
if you start saving the stack you end up with stackful tasks
<hkaiser>
alternatively, all functions in the call chain have to have a compiler-generated coro-frame, but even then the actual return addresses will end up on the scheduler stack
<hkaiser>
heller_: the coroutine_handle does not save the whole stack, btw
<heller_>
hmm
<heller_>
hkaiser: you are correct of course
<hkaiser>
k
<heller_>
this would have been too nice :(
<hkaiser>
yah, indeed
<hkaiser>
jbjnr: #3090 tells me there was one error on daint, but if I follow the link it does not tell me what's wrong
<hkaiser>
also, heller_: you fine with #3090 now?
<heller_>
hkaiser: I guess something went wrong when uploading the results
<heller_>
let's wait a little to see if the dashboard updates
<hkaiser>
three weeks ago there was no fixing_yield branch
<hkaiser>
and it says 23 hours ago, which is about right
<hkaiser>
that's what I see on the current dashboard homepage
<hkaiser>
the top of that page says 'No file changed as of Wednesday, January 10 2018 - 00:00 CET', so it is current
<jbjnr>
NB. I forgot to restart pycicle this morning when my machine was shut down by the people checking electric sockets. Just restarted it now. New builds about to begin
<jbjnr>
I no longer care what you two are looking at
<hkaiser>
ok, I give up - how do I see that from the link posted on the ticket?
<hkaiser>
something is really off :/
<hkaiser>
if the links in the tickets point to stuff you don't care about, why do you post them ?
<heller_>
ok, the link you posted shows the error, yes, but the one in the PR is pointing to a bogus link
<jbjnr>
the link posted on the ticket is probably wrong. I have not checked that in a while. just look at the front page of the dashboard. click dashboard, then current. It isn't hard.
<hkaiser>
jbjnr: sure, I can do that, but then please let the tickets point there as well
<jbjnr>
going home now so that I don't have to look at IRC any more
<hkaiser>
or don't post a link altogether
<hkaiser>
jbjnr: don't be angry - I'm happy you created pycicle
<jbjnr>
I'm not reading this
<jbjnr>
:)
<hkaiser>
lol
<hkaiser>
heller_: see pm, pls
parsa has joined #ste||ar
eschnett has quit [Quit: eschnett]
twwright_ has joined #ste||ar
twwright has quit [Ping timeout: 248 seconds]
twwright_ is now known as twwright
eschnett has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
diehlpk_work has quit [Quit: Leaving]
parsa has quit [Quit: Zzzzzzzzzzzz]
K-ballo has quit [Quit: K-ballo]
parsa has joined #ste||ar
K-ballo has joined #ste||ar
david_pfander has quit [Ping timeout: 276 seconds]
patg[w] has joined #ste||ar
jaafar has joined #ste||ar
patg[w] has quit [Quit: Leaving]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Client Quit]
hkaiser has quit [Quit: bye]
vamatya has quit [Read error: Connection reset by peer]
parsa has joined #ste||ar
jaafar has quit [Quit: Konversation terminated!]
jaafar has joined #ste||ar
Smasher has joined #ste||ar
rtohid has quit [Quit: rtohid]
jaafar has quit [Ping timeout: 276 seconds]
quaz0r has quit [Ping timeout: 252 seconds]
jaafar has joined #ste||ar
quaz0r has joined #ste||ar
hkaiser has joined #ste||ar
RostamLog has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
Smasher has quit [Remote host closed the connection]
Smasher has joined #ste||ar
aserio has joined #ste||ar
Smasher has quit [Remote host closed the connection]