aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 245 seconds]
EverYoung has quit [Ping timeout: 245 seconds]
<zao>
»You’ll need to update all of your config files to the CircleCI 2.0 syntax in order to migrate your projects to CircleCI 2.0 over the next 6 months.»
<K-ballo>
we are almost ready, we just need to take care of the .. vector component whatever
EverYoung has joined #ste||ar
eschnett has quit [Quit: eschnett]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
EverYoung has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Client Quit]
EverYoung has quit [Ping timeout: 245 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
<zao>
hkaiser: Regarding the hello_world problems on Windows (VS 2015), I managed to attach a debugger to node0 after loop-running it to get a crash after a few hours.
<hkaiser>
clang complains about this if there are virtual functions that are marked override but others are not
<hkaiser>
or it's on a branch, currently - I remember working on this
RostamLog has joined #ste||ar
<simbergm>
zao: nice, can you get it to hang with just one locality? (trying myself just now)
<zao>
Windows box is currently running two localities and one thread each.
<zao>
So the converse of your suggestion :)
<simbergm>
ok :) one locality is easier to debug, that's all
<simbergm>
K-ballo: my intention is to keep master stable also after the release
<simbergm>
don't know if I have the authority to enforce that though
<simbergm>
and I know our PR testing is still a bit lacking, but that's a matter of getting more builders running
<K-ballo>
that's not the same "stable" I meant
<simbergm>
ok, good (I think)
<simbergm>
(if I understand you correctly)
<K-ballo>
in preparation for a release, one is supposed to hold back the big changes and only make minor commits with a low probability of disturbing the release
<simbergm>
do you feel held back because of the release?
<K-ballo>
whereas right after a release is the ideal time to do big changes with huge breakage potential
<simbergm>
PRs can still be opened, they don't need to be merged
eschnett has quit [Quit: eschnett]
nikunj has joined #ste||ar
aserio has quit [Ping timeout: 276 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
anushi has quit [Remote host closed the connection]
anushi has joined #ste||ar
twwright_ has joined #ste||ar
twwright has quit [Read error: Connection reset by peer]
twwright_ is now known as twwright
EverYoung has joined #ste||ar
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
parsa has quit [Client Quit]
Smasher has joined #ste||ar
eschnett has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Client Quit]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
vamatya has joined #ste||ar
parsa has quit [Client Quit]
nanashi64 has joined #ste||ar
hkaiser has joined #ste||ar
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi64 is now known as nanashi55
CaptainRubik has quit [Ping timeout: 260 seconds]
nikunj has quit [Ping timeout: 260 seconds]
david_pfander has quit [Ping timeout: 240 seconds]
msca8h has joined #ste||ar
<msca8h>
hi! is there a mentor for the "Applying machine learning techniques.." project?
<msca8h>
anyone?
<hkaiser>
hey
<hkaiser>
msca8h
<hkaiser>
there is diehlpk_work, also Zahra (but she is not here, atm)
<diehlpk_work>
Yes
<msca8h>
ah, hi
<diehlpk_work>
Hi
<msca8h>
I'm Ray Kim from the mailing list
<diehlpk_work>
Ok, how can I help?
EverYoun_ has joined #ste||ar
<msca8h>
about the 'output' of the machine learning method
<msca8h>
I wanted to ask,
<msca8h>
If I were to predict the prefetching distance too,
EverYoung has quit [Ping timeout: 260 seconds]
<msca8h>
I think I would need features related to memory consumption patterns of
<msca8h>
the task
<msca8h>
But assuming only runtime dynamic features, do you think there are features sufficient to make a prediction on prefetching?
<diehlpk_work>
msca8h, I think memory consumption patterns are not easy to use
<diehlpk_work>
We only know them for specific algorithms, and when we use a for loop we cannot easily estimate the memory the user allocates there
<msca8h>
Yes. Then do you think it's possible to do something about prefetching?
<msca8h>
If not, only predicting chunk size would be possible, I guess.
<diehlpk_work>
You could predict the threshold for single or multiple threads
<diehlpk_work>
Chunksize
<msca8h>
Those are definitely on the list
<diehlpk_work>
You can propose any attribute you think can be predicted
<msca8h>
I was thinking about prefetching, which is mentioned in Zahra's paper
<diehlpk_work>
Sure, you could add this
<diehlpk_work>
You could consider using memory for algorithms you know
<diehlpk_work>
For a specific parallel algorithm, you could try to use the information about memory, but I am not sure how this would help
<msca8h>
I'll clarify my question. Do you think there are runtime features that can help predict the prefetching distance?
<diehlpk_work>
Have to think more about it
<msca8h>
My original judgment was that features containing memory-related information would be required
<msca8h>
But since there is no such runtime information,
<msca8h>
I came to this question
<diehlpk_work>
For a specific algorithm you can compute the memory at run time
<msca8h>
Yes, but that won't work with the current API of for_each, right?
<diehlpk_work>
You could provide different models for some of these algorithms, if you like to consider memory
<diehlpk_work>
But somehow your model would depend on the CPU type
<msca8h>
Ah ok I could consider that
<diehlpk_work>
Because you have to consider details like cache size and so on for different CPUs
<msca8h>
yes I see
EverYoung has joined #ste||ar
<diehlpk_work>
I think for a for_each loop, we should not consider memory
<msca8h>
then for for_each, maybe predicting prefetching won't be possible (in case of using only dynamic features)
<diehlpk_work>
Even with non-dynamic features it will be difficult
EverYoun_ has quit [Ping timeout: 252 seconds]
<diehlpk_work>
Because one could allocate memory inside the loop and write data there
<msca8h>
lol I see
<msca8h>
Ok, then in my proposal I will only consider predicting chunk size and the parallelization threshold
<diehlpk_work>
Sure, you could add the idea with the memory for specific algorithms too
<msca8h>
I think that's a good one
<msca8h>
thanks for your time
<diehlpk_work>
You are welcome
<diehlpk_work>
When do you intend to start writing your proposal?
<msca8h>
I already have an outline; I should start as soon as I have time
<diehlpk_work>
Ok, please share the proposal soon with us
<diehlpk_work>
We will read it and provide you with constructive feedback
<msca8h>
Before the application date?
<diehlpk_work>
Yes
<msca8h>
Should I just post it on the mailing list?
<diehlpk_work>
No, send it to me and I will share it with the other mentors
aserio has joined #ste||ar
<diehlpk_work>
Send me a link to a google doc and we will provide our remarks there
<msca8h>
Ok. I'll come back with a proposal. Thanks for everything
<diehlpk_work>
Perfect, we are looking forward to reviewing it
msca8h has quit [Remote host closed the connection]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<github>
[hpx] hkaiser force-pushed fixing_3182 from 3aa3283 to 7b7c183: https://git.io/vAaOj
<github>
hpx/fixing_3182 7b7c183 Hartmut Kaiser: Fixing return type calculation for bulk_then_execute....
<zao>
1thread 2localities has run for 15000 iterations now, that's nice.
* zao
flips to 2thread 1loc
victor_ludorum has joined #ste||ar
<zao>
5.7k runs, no explosions with one locality doing two threads yet.
mcopik has joined #ste||ar
<hkaiser>
zao: stress testing things - huh?
<zao>
hkaiser: I cannot get hello_world on Windows to break the way it did with two threads in each of two localities, when running with -t1 -l2 or -t2 -l1
<zao>
That is, -t2 -l2 is the breaking case, the other two cases have yet to break.
aserio has quit [Ping timeout: 256 seconds]
<github>
[hpx] victor-ludorum opened pull request #3206: Addition of new Count arithmetic performance counter (master...count_arithmetic_performance_counter) https://git.io/vA1lu
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
hkaiser has quit [Quit: bye]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
aserio has joined #ste||ar
eschnett has quit [Quit: eschnett]
hkaiser has joined #ste||ar
victor_ludorum has quit [Ping timeout: 260 seconds]
<aserio>
hkaiser: I have got a good rough draft of the February report up
<aserio>
after you add your changes I will create the blog post
<hkaiser>
aserio: ok, thanks, could you give me the link, pls?
<hkaiser>
(or send an email with the link)
<aserio>
hkaiser:
<aserio>
you should have it in the pm
<hkaiser>
got it, tks
<zao>
simbergm: I can pretty much guarantee that you need both two threads and two localities to trigger the hello_world crash on Windows.
eschnett has joined #ste||ar
<zao>
Reducing either to 1 lets it run without crashing for at least 5x more attempts.
<zao>
Ooh, -l2 -t6 hit a failure on the 24th run already.
<zao>
hkaiser: 0x00007ffa9e60de70 "f:\\dd\\vctools\\crt\\crtw32\\stdcpp\\thr\\mutex.c(173): unlock of unowned mutex"
<hkaiser>
grrr
<hkaiser>
during shutdown again?
<zao>
No!
<hkaiser>
I might just let this mutex leak :/
<zao>
Several threads alive, doing assorted stuff.
<hkaiser>
this as well: https://github.com/STEllAR-GROUP/phylanx/blob/5f103fa9519aaefff2653612dd60b110d6366ee6/tests/unit/execution_tree/primitives/shuffle_operation.cpp#L29