aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
Smasher has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
hkaiser has quit [Quit: bye]
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
vamatya has joined #ste||ar
eschnett has quit [Quit: eschnett]
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
vamatya has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
jaafar_ has quit [Ping timeout: 248 seconds]
vamatya has quit [Quit: Leaving]
<simbergm>
good morning heller_, did you decide in the end to drop boost < 1.58?
<heller_>
simbergm: good morning
<heller_>
I am not sure what the final decision was
<heller_>
I am searching for a proper small_vector replacement now
<simbergm>
heller_: okay, nice
<simbergm>
then there were some problems with gcc 4.9 and boost 1.58 and 1.59 as well
<heller_>
looking into it right now
<simbergm>
all I can tell so far is there's a lock not being unlocked before an assert
<simbergm>
but that's not the cause
<simbergm>
thanks
marco has joined #ste||ar
david_pfander has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
<jbjnr>
heller_: yt?
<heller_>
jbjnr: hey
<jbjnr>
hiya - question - are there any branches I need to try that improve small task times?
<jbjnr>
(I see thread_destruction ...)
EverYoung has joined #ste||ar
<heller_>
jbjnr: yes, thread_destruction is the latest
<heller_>
got caught up with a cold and the problem with small_vector and boost < 1.58
<jbjnr>
thanks
EverYoung has quit [Ping timeout: 276 seconds]
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
<heller_>
I really don't want to code up a small_vector now :/
<simbergm>
heller_: it could wait (i.e. we can still revert), depends how desperate jbjnr is to have exactly that PR in master
<heller_>
I think the PR should stay
<heller_>
we should bump the required boost versions ;)
<heller_>
I think hartmut agreed it might be a good idea
<simbergm>
I'm not against that at all
<jbjnr>
I agree
<simbergm>
:)
<simbergm>
looking into the stacksize issue, and I think your first patch is probably good, but moving the defines into a separate file messes something up
<jbjnr>
I'm only desperate to have PRs merged if they have been shown to make a difference.
<simbergm>
the test hangs on the second commit for me but not the first on gcc 6
<jbjnr>
I have not tried the PR yet (thread destruction), but heller can tell me if it is significant - or are you talking about the continuation PR that was merged already?
<simbergm>
jbjnr: now it's about the one that was already merged
<jbjnr>
Boost : Version 1.58.0 :
<jbjnr>
April 17th, 2015 07:53 GMT
<jbjnr>
I think we can safely upgrade. 2015 was a long time ago and hpx is supposed to be the future
<simbergm>
mmh
<simbergm>
at least ubuntu LTS and debian stable have newer boosts
<zao>
We still pretend to support ancient compilers tho, don't we?
<heller_>
ancient as in clang 3.8 and gcc 4.9
<heller_>
yes
<heller_>
those need to go ;)
<simbergm>
default gcc on ubuntu lts is 4.5
<simbergm>
should change soon though
<zao>
I need to set up some CentOS container at home. My current debian and Ubuntu can't compile 4.9 anymore.
<heller_>
we should probably go for something like a circular buffer
<heller_>
support the latest 3 major versions (or more?), and update dependencies more frequently
<zao>
Can HPX work with system-built Boost anyway?
<heller_>
in principle, sure
<zao>
With regard to the C++ standards used.
Smasher has quit [Ping timeout: 240 seconds]
Smasher has joined #ste||ar
<marco>
hi, short question: There are some changes to the execution policies in parallel::for_each/parallel::for_loop. Should I use hpx::parallel::execution::par_unseq instead of hpx::parallel::par_vec?
<jbjnr>
yes
<jbjnr>
(I think, but it hasn't been implemented anyway, so ...)
<jbjnr>
(I mean the vectorization isn't in place, only the policy)
<marco>
thx
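For reference, a minimal sketch of the policy rename being discussed, assuming an HPX of roughly this vintage; as noted above, par_unseq currently only selects the policy, the vectorization itself is not implemented yet. The header path and element type are illustrative.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>

#include <vector>

int main()
{
    std::vector<double> v(1000, 1.0);

    // New-style policy namespace: execution::par_unseq replaces the old
    // par_vec spelling. It runs in parallel, but does not (yet) force
    // vectorization of the lambda body.
    hpx::parallel::for_each(
        hpx::parallel::execution::par_unseq, v.begin(), v.end(),
        [](double& x) { x *= 2.0; });

    return 0;
}
```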
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
<heller_>
marco: were you able to resolve your performance issue?
<heller_>
did the linearization help?
<heller_>
jbjnr: back in lugano?
<jbjnr>
yes. home now
<heller_>
welcome back
<heller_>
how was the trip? successful?
<jbjnr>
the main interesting news is that these guys http://icl.utk.edu/slate/#overview are producing a new linear algebra library in c++ (instead of the old scalapack and everything based on the original fortran). We might get involved using HPX
<jbjnr>
these are the magma/plasma guys and the new slate library will be a rewrite of it all for exascale
<heller_>
nice!
<heller_>
how come they don't want to stick with plasma/magma?
<jbjnr>
not for them. they are targeting openmp and openacc (poor bastards)
<heller_>
oh gosh
<jbjnr>
they are the authors of magma/plasma
<marco>
heller_ : The linearization did not help. But I will check the behavior with a reimplemented test version. We had the same behavior with an OpenCL implementation, ...
<heller_>
ok
<heller_>
interesting
<jbjnr>
this will be the new magma/plasma++ using pure c++ all the way - except they will use #pragma shite
<jbjnr>
so since it is open source - we can do an hpx version :)
<jbjnr>
we'll see.
<jbjnr>
they were impressed by cholesky :)
<heller_>
yes!
<heller_>
even though it doesn't match the performance yet ;)?
<jbjnr>
the main problem is (as always) politics
<jbjnr>
Raffaele showed the cholesky stuff and limited his talk to the 512 block size, where we do fine.
<heller_>
how come we don't have a good politician?
<jbjnr>
we're pretty close with 256 and still improving
<jbjnr>
the problem is that the ECP project is the big DOE exascale project, and they fund Tron - so we can't be involved!
<jbjnr>
if hk had ecp funding ....
<jbjnr>
(but he doesn't)
<heller_>
they fund tron oO
<jbjnr>
exactly
<heller_>
interesting
<heller_>
there doesn't seem to be anything official though
<jbjnr>
?
<heller_>
I can't find a press release or anything linking tron to ecp
<github>
[hpx] msimberg created fix-config-typo (+1 new commit): https://git.io/vNh3E
<github>
hpx/fix-config-typo c558cb2 Mikael Simberg: Fix typo in config.hpp
<heller_>
simbergm: #3137 seems to hang on windows
<github>
[hpx] msimberg opened pull request #3142: Fix typo in config.hpp (master...fix-config-typo) https://git.io/vNh3r
<simbergm>
heller_: thanks, I'll have a look (or at least see if it hangs on linux as well)
<simbergm>
jbjnr: did something happen to your pycicle instance?
eschnett has joined #ste||ar
<simbergm>
heller_: btw, the thread stacksize problem was just that HPX_DEBUG wasn't defined in the new config header, so the debug builds were using release-mode stack sizes
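A sketch of the shape of that problem (the macro name and values below are made up, this is not the actual HPX config header): if HPX_DEBUG is not visible where the stack-size defines were moved, the #else branch silently picks the smaller release-mode sizes, which is exactly the symptom described.

```cpp
// Illustrative only: without HPX_DEBUG defined in this header, debug builds
// fall through to the release-mode stack size.
#if defined(HPX_DEBUG)
#  define EXAMPLE_SMALL_STACK_SIZE 0x20000  // larger stacks for debug builds
#else
#  define EXAMPLE_SMALL_STACK_SIZE 0x8000   // smaller release-mode stacks
#endif
```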
<jbjnr>
simbergm: yes. something has gone wrong. I might have trashed the pycicle dir on daint when I was testing a new feature - I am running pycicle on DCA as well and I upgraded pycicle to handle other projects (i.e. non-hpx), but I might have written files into the current pycicle dir by mistake.
<simbergm>
jbjnr: okay, good to know
<simbergm>
do you think you'll have it running again soon or should I set mine to do PRs as well?
<jbjnr>
hold on. let me try to fixy fixy
<simbergm>
okay :)
daissgr has joined #ste||ar
hkaiser has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
gedaj has quit [Read error: Connection reset by peer]
gedaj has joined #ste||ar
gedaj has quit [Read error: Connection reset by peer]
gedaj has joined #ste||ar
<heller_>
hkaiser: good morning
<heller_>
hkaiser: when I call CreateFiberEx, is the stack being allocated/prepared already within that call, or is this happening only when you first switch to that new fiber?
<zao>
The documentation leads with "Allocates a fiber object, assigns it a stack, and..."
<zao>
ReactOS invokes BaseCreateStack at least.
<heller_>
hmmm
<hkaiser>
heller_: yah, full initialization
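For context, a minimal Win32 sketch of the call being discussed (error handling omitted; the commit/reserve sizes are arbitrary example values). Per the answer above, the stack is allocated and prepared inside CreateFiberEx itself, not lazily on the first SwitchToFiber.

```cpp
#include <windows.h>

static void CALLBACK fiber_main(void* param)
{
    // ... work on the fiber's own stack ...
    SwitchToFiber(param);  // switch back to the fiber passed in as parameter
}

int main()
{
    // The calling thread must itself become a fiber before it can switch.
    void* main_fiber = ConvertThreadToFiber(nullptr);

    // 64 KiB committed, 1 MiB reserved - the stack is fully set up within
    // this call, not deferred to the first switch.
    void* worker = CreateFiberEx(64 * 1024, 1024 * 1024, 0,
        &fiber_main, main_fiber);

    SwitchToFiber(worker);  // runs fiber_main on the newly created stack

    DeleteFiber(worker);
    ConvertFiberToThread();
    return 0;
}
```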
hkaiser has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
akheir has quit [Remote host closed the connection]
aserio has joined #ste||ar
eschnett has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 252 seconds]
aserio1 is now known as aserio
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
hkaiser has joined #ste||ar
<simbergm>
heller_: this also needs a boost version check, or am I missing something? and an #include <vector> for boost < 1.58?
<Guest96040>
[hpx] msimberg pushed 1 new commit to completion_handler: https://git.io/vNhXX
<Guest96040>
hpx/completion_handler 2394aac Mikael Simberg: Add boost version check for vector/small_vector include in future_data.hpp
<simbergm>
heller_: added the boost version check to the includes on your branch, will try to merge it later tonight
<heller_>
simbergm: merci
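A hedged sketch of the kind of guard being added (not the actual future_data.hpp diff; the namespace and typedef names are illustrative): boost::container::small_vector only appeared in Boost 1.58, so older Boost needs a fallback such as std::vector, plus the matching include.

```cpp
#include <boost/version.hpp>

#if BOOST_VERSION >= 105800
#include <boost/container/small_vector.hpp>
#else
#include <vector>
#endif

namespace example {
#if BOOST_VERSION >= 105800
    // Up to 3 elements stored inline, no heap allocation in the common case.
    using callback_list = boost::container::small_vector<int, 3>;
#else
    // Pre-1.58 fallback: plain std::vector, always heap-allocated.
    using callback_list = std::vector<int>;
#endif
}
```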
EverYoung has joined #ste||ar
david_pfander has quit [Ping timeout: 256 seconds]
tt___ has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
gedaj has quit [Remote host closed the connection]
gedaj has joined #ste||ar
galabc has joined #ste||ar
<galabc>
\whois aserio
<tt___>
I was thinking about working on these projects for GSoC: 1. Create Generic Histogram Performance Counter, 2. Add More Arithmetic Performance Counters. Is there any tutorial I can refer to for using the hpx lib, and would I need access to specific hardware for this?
galabc has quit [Quit: Page closed]
<zao>
tt___: I would guess that you don't need anything overly special, as those two tasks seem to be about computing statistics on any source counters.
<zao>
So you could probably do your work for the easy counters, or the ones that need PAPI.
<zao>
(note, I don't know anything about this, just reading through the problem descriptions)
<zao>
I would recommend that you start by building HPX yourself on a PC you're comfortable with. It may be easier to do on a Linux installation if you have one.
<hkaiser>
tt___: but in order to understand what a counter is in hpx, those might be a good starting point for studying
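As one possible starting point of the kind suggested above, a small sketch of querying an existing HPX performance counter by name; the counter name is one of the stock thread counters, and the GSoC histogram/arithmetic counters would consume sources like this. Treat it as an assumption-laden example, not project documentation.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/performance_counters.hpp>

#include <cstdint>
#include <iostream>

int main()
{
    // Stock counter: cumulative number of HPX threads executed on locality 0.
    hpx::performance_counters::performance_counter counter(
        "/threads{locality#0/total}/count/cumulative");

    // get_value<>() returns a future; .get() waits for the sampled value.
    std::int64_t value = counter.get_value<std::int64_t>().get();
    std::cout << "threads executed so far: " << value << std::endl;

    return 0;
}
```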
<github>
[hpx] hkaiser created fixing_async (+1 new commit): https://git.io/vNhdt
<github>
hpx/fixing_async 0405dc8 Hartmut Kaiser: Minor changes to how actions are executed. This mostly improves consistency of different APIs.
<github>
[hpx] hkaiser opened pull request #3144: Minor changes to how actions are executed (master...fixing_async) https://git.io/vNhd3
tt___ has quit [Ping timeout: 260 seconds]
EverYoung has quit [Ping timeout: 240 seconds]
EverYoung has joined #ste||ar
jaafar_ has joined #ste||ar
aserio has quit [Ping timeout: 252 seconds]
aserio has joined #ste||ar
mcopik has joined #ste||ar
hkaiser has quit [Quit: bye]
<heller_>
jbjnr: would pycicle also work on windows?
<jbjnr>
heller_: pycicle requires python and a few python libs (pygithub); as long as you can get them, all should be ok - and ssh - remote pycicle use needs to be able to ssh out to the build machine.
<jbjnr>
a windows bash shell should be fine
<jbjnr>
sorry. I have been running pycicle on my windows machine for the last 6 weeks - it's the machine controlling the daint builds
<jbjnr>
I forgot about that because the bash shell is so good that I forgot it was windows
<jbjnr>
:)
<jbjnr>
PS. I've decided not to run pycicle any more, to see if any of you lot notice that the PRs aren't being built
<K-ballo>
and did we?
eschnett has quit [Quit: eschnett]
aserio has quit [Quit: aserio]
hkaiser has joined #ste||ar
<jbjnr>
K-ballo: not enough!
gedaj has quit [Read error: Connection reset by peer]
gedaj has joined #ste||ar
hkaiser has quit [Quit: bye]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 276 seconds]
EverYoung has quit [Remote host closed the connection]