aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
hkaiser[m] has quit [Remote host closed the connection]
hkaiser[m] has joined #ste||ar
hkaiser[m] has quit [Remote host closed the connection]
hkaiser[m] has joined #ste||ar
hkaiser[m] has quit [Remote host closed the connection]
hkaiser[m] has joined #ste||ar
anushi has quit [Remote host closed the connection]
anushi has joined #ste||ar
eschnett has quit [Quit: eschnett]
hkaiser has quit [Quit: bye]
anushi has quit [Ping timeout: 276 seconds]
anushi has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has joined #ste||ar
hkaiser has quit [Client Quit]
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
eschnett has joined #ste||ar
CaptainRubik has joined #ste||ar
CaptainRubik has quit [Ping timeout: 260 seconds]
jaafar has quit [Ping timeout: 256 seconds]
<jbjnr> <tumbleweed>
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
CaptainRubik has joined #ste||ar
<heller_> jbjnr: swoosh
<jbjnr> and zoom
<heller_> lots of insight
david_pfander has joined #ste||ar
<jbjnr> wait_or_add_new?
<jbjnr> <cough>
<jbjnr> anyway. I found a bug in my stuff, so nothing matters any more
<heller_> wait_or_add_new has to wait ...
<heller_> jbjnr: I am pretty certain that wait_or_add_new is not your culprit
<jbjnr> don't care. Just want to see less cruft in the code :)
<heller_> :P
<jbjnr> clean code = nice code!
<heller_> I agree
<jbjnr> I still don't know what wait_or_add_new does - that's why I hate it!
<heller_> you have every reason!
<CaptainRubik> (Re: Concurrent Data Structures) Hi, we only need implementations of non-intrusive data structures from libcds, right?
<CaptainRubik> Also, I have been reading this https://www.research.ibm.com/people/m/michael/ieeetpds-2004.pdf and it seems that the first thing to do should be to implement hazard pointers in hpx
<CaptainRubik> since it is extensively used in libcds implementation.
<heller_> I think intrusive data structures are very interesting as well
<heller_> nevertheless, keep in mind, that you have to narrow down your proposal such that you can actually implement everything you promised
<CaptainRubik> I assumed that because of the template-style coding in hpx. Anyway, it is just a matter of search-and-replace once we have the non-intrusive one in place.
<jbjnr> CaptainRubik: hazard pointers would be needed for sure.
<heller_> implementing hazard pointers sounds like a full project
<jbjnr> well, if the code can be simply 'ported from libcds' then it should not be too bad.
<jbjnr> ideally we want to use the libcds code - and just add #ifdefs for hpx ... if that's feasible.
<jbjnr> though I suppose some of the basic building blocks/primitives like that we will want in hpx in pure/clean form
<heller_> yeah, and I am also unsure how "modern" libcds is, as in proper move semantics etc
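The hazard-pointer idea under discussion can be sketched compactly. Below is only an illustrative, single-threaded C++ walk-through of the reclamation scheme from Michael's 2004 paper linked above - not the libcds or HPX implementation, and all names are invented: a reader publishes the pointer it is about to dereference in a per-thread slot, a deleter parks nodes on a retired list, and a scan frees only those retired nodes that no slot protects.

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <vector>

// Illustrative sketch only (names invented; not libcds/HPX code).
// One hazard-pointer slot per thread; globals are zero-initialized.
constexpr int kMaxThreads = 8;

struct Node { int value; };

std::atomic<Node*> hazard_slots[kMaxThreads];

// Nodes removed from a data structure but not yet safe to delete.
std::vector<Node*> retired;

// A reader publishes the pointer it is about to dereference ...
void protect(int slot, Node* p) { hazard_slots[slot].store(p); }
// ... and clears the slot once it is done with the node.
void clear(int slot) { hazard_slots[slot].store(nullptr); }

// A deleter never frees directly; it parks the node on the retired list.
void retire(Node* p) { retired.push_back(p); }

// Scan: free every retired node that no hazard slot currently protects.
// Returns the number of nodes actually reclaimed.
int scan() {
    std::vector<Node*> live;
    for (auto& slot : hazard_slots)
        if (Node* p = slot.load()) live.push_back(p);

    int freed = 0;
    for (auto it = retired.begin(); it != retired.end();) {
        if (std::find(live.begin(), live.end(), *it) == live.end()) {
            delete *it;
            it = retired.erase(it);
            ++freed;
        } else {
            ++it;
        }
    }
    return freed;
}
```

In the real multi-threaded algorithm the store in protect() must be followed by a re-read of the shared pointer to validate it, and each thread scans its own private retired list; those details are what would make a faithful port the "full project" mentioned above.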
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vABjb
<github> hpx/gh-pages e0bbf54 StellarBot: Updating docs
<jbjnr> heller_: is background_work supposed to be a high_priority thread?
<heller_> jbjnr: I think so, yes
<jbjnr> all the apex tasks are coming out as high_priority too. odd
<heller_> jbjnr: on the topic of your fine grained problems ... did you eventually profile a small and larger block size run and compared the differences?
<heller_> that is, not with APEX but with vtune or hpctoolkit or even perf?
<jbjnr> got sidetracked by finding a different bug. tasks are not running at the correct priority
<jbjnr> hence my queries
<heller_> i see
<CaptainRubik> @heller_ : I checked the documentation for libcds. It has move semantics.
<github> [hpx] msimberg opened pull request #3183: Add runtime start/stop, resume/suspend and OpenMP benchmarks (master...suspension-benchmarks) https://git.io/vARIq
<jbjnr> CaptainRubik: good
<CaptainRubik> Regarding #ifdefs, I have never tried porting like that before. My understanding of the project is that I will have to retype it line by line after understanding the fundamental algorithms.
<jbjnr> have a look at the hazard pointer code and see what it does - I've never looked myself, so can't make a guess - but if it requires stuff that's already in HPX, then we should try to use the existing HPX code. It might be worth using #ifdef LIBCDS_HAVE_HPX and then switching the code out to use our stuff where possible. I'm not expecting hazard pointers to use locks or mutexes etc, but some other code might benefit from
<jbjnr> this approach
apsknight has joined #ste||ar
<CaptainRubik> Ok I will check the docs
<jbjnr> check the code!
apsknight has left #ste||ar [#ste||ar]
<CaptainRubik> Sure. It just gets too confusing at the beginning to directly look at the code :P
<CaptainRubik> Will do though.
<jbjnr> understood
CaptainRubik has quit [Quit: Page closed]
<simbergm> jbjnr: what's the latest working version of hpx + hpx_mpi_linalg? I'd like to go through the exercise of wrapping it in a library, suspending, resuming etc. (+ it will need to be done eventually anyway)
<simbergm> and I'm curious to take a look at your scheduler
<jbjnr> how about if I come to ethz one day soon?
<simbergm> that works too :)
<jbjnr> quite a bit of my stuff is not committed and my guided pool hpx branch is a total mess
<jbjnr> need to get it in before the release :)
<jbjnr> along with all my scheduler hints stuff
<simbergm> aha, you think that'll still happen? how long do you expect that to take?
<jbjnr> what? the release or getting my stuff in?
<simbergm> getting the stuff in before the release :)
<simbergm> (I know we decide when the release will be)
<jbjnr> exactly!
<jbjnr> let me fix this priority issue and then talk again ...
<jbjnr> trying to watch the olympics, write code, and chat on irc at once. tricky
<simbergm> sure, concentrate now!
<heller_> we should release soon though
<heller_> jbjnr: is it C++ olympics?
<simbergm> heller_: yeah, I know...
<simbergm> :P
<simbergm> we can always do more releases
<jbjnr> more releases! that's heresy
<simbergm> no! it's good
<simbergm> it means master has to stay green
<jbjnr> now that we have decent testing, I'm all for it
<simbergm> good
* jbjnr pats himself on the back
<simbergm> you deserve that
<jbjnr> lol
<heller_> good boy
<simbergm> (honestly)
<jbjnr> You've done a good job with all the PRs too. We've made a lot of progress.
<jbjnr> when are we losing heller from the team?
<heller_> not anytime soon
<jbjnr> job much?
<heller_> in the makings ... but i won't get away from HPX soon
<simbergm> heller_: will you have time to try out my branch for skipping suspended threads in schedule_thread?
<jbjnr> got my PhD defense next week! the finish line is near!
<simbergm> don't want to make a PR if it doesn't fix your problem with allscale...
<simbergm> jbjnr: are you prepared?
<heller_> jbjnr: YES!
<jbjnr> not yet. what do I need to do?
<simbergm> don't ask me, never done that
<heller_> depends on your committee i guess
<jbjnr> indeed. nor I
<jbjnr> I was planning on just answering questions. not like I need to revise for exams or anything (hopefully)
<jbjnr> lunch ...
<heller_> simbergm: which one is yours?
<heller_> I'll let IBM test it
<simbergm> heller_: good, thanks
<simbergm> it's a bit of a mix now, it tries to schedule on a non-suspended one, but can fall back to the current thread if needed (I do that in the scheduling loop)
<heller_> simbergm: muchas gracias
<heller_> I'll tell you once I hear back
<simbergm> thanks!
K-ballo has joined #ste||ar
EverYoung has joined #ste||ar
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
apsknight1 has joined #ste||ar
apsknight1 is now known as apsknight
hkaiser[m] has quit [Ping timeout: 256 seconds]
<github> [hpx] msimberg opened pull request #3184: Fix nqueen example (master...fix-nqueen) https://git.io/vARCP
apsknight has quit [Quit: apsknight]
<zao> 11/136 failures of test_migrate_busy_component, decent hit rate :)
<hkaiser> heller_: so your patch didn't fix anything, just made it worse :/
<heller_> hkaiser: I wonder why it didn't show up anymore on my local machine or on buildbot
<hkaiser> shrug, no idea
<heller_> did show up on buildbot
<zao> (this is my Ryzen on a debian container, moderate Clang version)
<heller_> and do we know if it made it worse?
<hkaiser> heller_: it didn't fail that often, did it?
<hkaiser> but it may not have made it worse, I have no data
<heller_> it did, which was the reason why i looked into it in the first place
<hkaiser> k
<heller_> would be interesting to compare though
<heller_> before the PR was merged, I ran my patch for 8 hours straight without a single failure
<heller_> which was not the case before... that's my data point
<hkaiser> ok, I take it back then
<zao> What PR was this?
<heller_> zao: #3164
<heller_> a comparison would indeed be interesting
<github> [hpx] K-ballo opened pull request #3185: Remove unused scoped_unlock, unlock_guard_try (master...unlock_guard) https://git.io/vARl8
<hkaiser> simbergm: I'll try to look into #3182 asap
<simbergm> hkaiser: thanks!
mcopik has joined #ste||ar
eschnett has quit [Quit: eschnett]
eschnett has joined #ste||ar
<Antrix[m]> Are there functions/classes/namespaces which don't have dependencies in the hpx code base?
<heller_> what are you looking for?
<K-ballo> start porting from the leaves?
<Antrix[m]> yes
<Antrix[m]> K-ballo: This is exactly what I am trying to do
<Antrix[m]> Future and promise are dependent on too many things
<heller_> what are you after?
<K-ballo> they are, which is why I suggested porting fake futures and promises
<K-ballo> future and promise depend on too many things for their implementation, not so much for their interfaces
<Antrix[m]> heller_: I am trying to port hpx functions to python3
<heller_> I see
<heller_> is porting really the right word here?
<K-ballo> uhm
<K-ballo> yeah, I was going to note that
<K-ballo> I've been saying export or expose or something
<heller_> you want to make python3 interface with hpx futures, no?
<Antrix[m]> Yes
<Antrix[m]> python3 interface for hpx
<Antrix[m]> futures to start with
<K-ballo> a port implies reimplementing, we just want to make the existing implementation available from python
EverYoung has joined #ste||ar
<Antrix[m]> Oh I see. No, I am not porting hpx. I would die doing it.
<K-ballo> precisely :P
<heller_> Antrix[m]: I would start with making myself familiar with python C++ bindings, write some small examples to get a feeling
<K-ballo> so, all the implementation dependencies don't matter
<Antrix[m]> the class implementation of promise depends on promise_base, and that also has further dependencies. I will have to make interfaces for all of them, right?
<hkaiser> Antrix[m]: no
<K-ballo> no
<K-ballo> only interfaces have to be interfaced
<Antrix[m]> Oh
<K-ballo> that said, promise_base might contribute to the interface of promise, it's not an implementation detail
<heller_> Antrix[m]: don't start with any HPX code right away. make yourself comfortable with pybind11 first
<heller_> if you already are: great!
<hkaiser> K-ballo: shouldn't be separately exposed, though
<K-ballo> nod, contributes to
EverYoung has quit [Ping timeout: 276 seconds]
<hkaiser> right
<heller_> Antrix[m]: do you know what the GIL is?
<Antrix[m]> heller_: I looked at examples in the pybind11 documentation
<Antrix[m]> global interpreter lock? No, I don't
<Antrix[m]> Ok, I see what you are pointing at. I will read the documentation for GIL in pybind11
<github> [hpx] msimberg opened pull request #3186: Remove hierarchy, periodic priority and throttling schedulers (master...remove-schedulers) https://git.io/vARuX
hkaiser has quit [Quit: bye]
CaptainRubik has joined #ste||ar
<diehlpk_work> heller_, see pm
<heller_> Antrix[m]: you won't be able to fix the GIL, you just need a strategy to work with it
<Antrix[m]> heller_: Oh I see
nanashi55 has quit [Ping timeout: 265 seconds]
nanashi55 has joined #ste||ar
galabc has joined #ste||ar
galabc has quit [Ping timeout: 260 seconds]
eschnett has quit [Quit: eschnett]
hkaiser has joined #ste||ar
aserio has joined #ste||ar
galabc has joined #ste||ar
<jbjnr> hkaiser: heller_: is there a setting to control how many tasks get stolen at a time?
<jbjnr> I remember seeing one
<hkaiser> jbjnr: hold on
<jbjnr> min_tasks_to_steal_pending = ${HPX_THREAD_QUEUE_MIN_TASKS_TO_STEAL_PENDING:0}
<jbjnr> min_tasks_to_steal_staged = ${HPX_THREAD_QUEUE_MIN_TASKS_TO_STEAL_STAGED:10}
<jbjnr> found these. I think they are the ones I am after
<hkaiser> jbjnr: precisely
<jbjnr> do I set these using -Ihpx.min_tasks_to_steal_pending=0 or something on the command line?
<jbjnr> hpx.thread_queue.min_tasks_to_steal_pending
<jbjnr> I see
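Collected from the snippets above, the relevant ini section and the command-line override form (the values shown are just the defaults quoted in the chat):

```ini
[hpx.thread_queue]
min_tasks_to_steal_pending = 0
min_tasks_to_steal_staged = 10

; equivalently, on the command line (as used above):
;   -Ihpx.thread_queue.min_tasks_to_steal_pending=0
```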
<galabc> Hi I have a question concerning the two logistic regressions used in the article https://arxiv.org/pdf/1711.01519.pdf
<galabc> A binary logistic regression is used to find the optimal execution policy (sequential or parallel)
<galabc> And a multinomial regression is used to find the optimal chunk size and prefetching distance
<galabc> First off, I was wondering if the multinomial regression was only used when the execution policy was chosen to be parallel
jaafar has joined #ste||ar
<diehlpk_work> galabc, I think you have to ask the authors these specific questions
<diehlpk_work> hkaiser, I think this question is for you :)
<hkaiser> galabc: yes
<hkaiser> galabc: all of this was an experiment and by no means exhaustive
<galabc> hkaiser, I was wondering if only one regression could be used
<galabc> one that gives you the execution policy and, in the case the policy is parallel, also gives you the chunk size and prefetching distance
<galabc> It could be trained to give chunk size=0, prefetching distance=0 if the execution chosen is sequential
<galabc> But I'm not sure
<hkaiser> galabc: sure, worth a try
<hkaiser> well
<hkaiser> LRA can give you a yes/no answer only, while the multinomial stuff can give you more choices
<galabc> but I assume a multinomial regression could also give yes or no
<galabc> as two different values
<galabc> it could output
<galabc> execution policy:0 (sequential) chunk size:0 prefetching distance:0
<galabc> execution policy:1 (parallel) chunk size:value other than zero prefetching distance:value other than zero
<galabc> It could be trained to always give those kinds of answers
<galabc> so it never gives
<galabc> execution policy:sequential chunk size:value other than zero
<galabc> But I assume it would take a lot of data
<galabc> hkaiser what do you think?
nanashi55 has quit [Ping timeout: 240 seconds]
eschnett has joined #ste||ar
<diehlpk_work> I think that one model is interesting, and you could try to get this model to work. Maybe one model for different parallel algorithms?
nanashi55 has joined #ste||ar
<diehlpk_work> I think when the model only has to predict one algorithm, it could be easier to develop and train
<hkaiser> galabc: could work
victor_ludorum has joined #ste||ar
<hkaiser> galabc: as I said it was an experiment
<galabc> Ok I will concentrate my research on that for now
<galabc> I think this could be my summer project
<galabc> But I need to understand more how multinomial regression works
<galabc> And I will also do research on other machine learning algorithms that can output values other than 0 and 1
<diehlpk_work> Yes, we are open-minded about how this problem could be solved and not focused on one solution
<galabc> Cause I'm a bit new to the subject
<diehlpk_work> galabc, GSoC is there to learn new things, not all students were experts in the topics
<galabc> diehlpk_work, you are right :)
<galabc> I think I will use python libraries to experiment before building a model
<diehlpk_work> As long as you are willing to learn new things and invest time it is fine
<diehlpk_work> @galabc, http://scikit-learn.org/stable/
<galabc> thank you
<galabc> I'm starting to get a better grasp of the project
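Before reaching for scikit-learn, the single-model idea above can be prototyped from scratch. The following is a toy, self-contained C++ sketch (all names and data invented; not the model from the paper) of the binary-logistic half of the problem: learning a yes/no parallel-vs-sequential decision from one feature by gradient descent. A multinomial (softmax) version with extra outputs for chunk size and prefetching distance would follow the same gradient pattern.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy illustration only: binary logistic regression trained by batch
// gradient descent. Label 0 = sequential, 1 = parallel; the single
// feature could be, e.g., a normalized problem size.

double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

struct LogisticModel {
    double w = 0.0;  // feature weight
    double b = 0.0;  // bias

    void fit(const std::vector<double>& x, const std::vector<int>& y,
             int epochs = 5000, double lr = 0.1) {
        for (int e = 0; e < epochs; ++e) {
            double gw = 0.0, gb = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i) {
                // Gradient of the log loss: (prediction - label) * input.
                double err = sigmoid(w * x[i] + b) - y[i];
                gw += err * x[i];
                gb += err;
            }
            w -= lr * gw / x.size();
            b -= lr * gb / x.size();
        }
    }

    // Returns 1 (parallel) when the predicted probability exceeds 0.5.
    int predict(double xi) const { return sigmoid(w * xi + b) > 0.5 ? 1 : 0; }
};
```

Trained on a handful of separable points, the model places its decision boundary between the two classes, which is all the yes/no policy choice needs; richer outputs are where the multinomial variant comes in.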
galabc has quit []
<Antrix[m]> I think for template classes, one has to make all the possible types explicit for pybind11. https://github.com/pybind/pybind11/issues/199
nanashi55 has quit [Ping timeout: 256 seconds]
<hkaiser> Antrix[m]: yes
<hkaiser> but you can play with templates in the bindings anyway
<Antrix[m]> hkaiser: Meaning?
<Antrix[m]> Do you know of some small library being interfaced to python using pybind11? I think I need rigorous examples; the documentation just scratches the surface
nanashi55 has joined #ste||ar
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 256 seconds]
<hkaiser> Antrix[m]: look at their test suite, that's pretty exhaustive
<Antrix[m]> hkaiser: Oh OK, thanks
nanashi64 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
simbergm has quit [Ping timeout: 240 seconds]
<zao> Bah, can't grep for pybind11 in my EasyBuild recipes, seems like it's mostly pulled in via pip.
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
CaptainRubik has quit [Ping timeout: 260 seconds]
<victor_ludorum> Here I found a comment about getting the base iterator from a local iterator. Can anybody explain how?
<victor_ludorum> Actually at that point there is a function like this: "static local_raw_iterator base(local_iterator it)", which was updated in PR 1346
<victor_ludorum> So, I just want to know: is this a doc error or something I don't know about?
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<hkaiser> victor_ludorum: hey
<hkaiser> that's a long story
<victor_ludorum> Okay Sir!! No problem. :)
<hkaiser> partitioned vector exposes 5 different iterator types (instead of just one exposed from std::vector)
<hkaiser> that's because of its distributed nature
<hkaiser> victor_ludorum: do you understand what partitioned_vector does?
<victor_ludorum> actually sir, I am reading your PR and understanding what you have done in it
<victor_ludorum> and learning about all those things
<hkaiser> victor_ludorum: you do know that the partitioned vector stuff is unrelated to the performance counters you mentioned in your email, do you?
<victor_ludorum> Yeah sir!! I know that. But I thought it would be nice if I could implement that too, because most of them have already been implemented
<victor_ludorum> and the arithmetic performance counter is linked with the histogram, but someone else has proposed to work on that
<victor_ludorum> Therefore, I thought to work on that too.
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<hkaiser> victor_ludorum: what did you have in mind?
<hkaiser> parallel algorithms for segmented containers?
<victor_ludorum> Sir, I want to work on arithmetic Counters
<hkaiser> victor_ludorum: ok, then you lost me :/
<victor_ludorum> Sir, actually that project is linked with histogram creation, so I thought it has less work to do; therefore I thought to implement the remaining parallel algorithms - one or two have been left unimplemented
<hkaiser> color me confused
<hkaiser> victor_ludorum: ok
<victor_ludorum> sorry sir
<hkaiser> but that has still no relation to partitioned_vector
<hkaiser> do you have any experience with sorting algorithms?
<victor_ludorum> Okay sir
<hkaiser> or even parallel sorting?
<victor_ludorum> Yeah sir!! I know sorting algorithms
<victor_ludorum> But I am learning about parallel algorithms
<hkaiser> ok
<hkaiser> parallel sorting is hard
<victor_ludorum> Okay sir
<victor_ludorum> Thanks sir for your advice So, I will work on arithmetic counters only .
<victor_ludorum> as someone has previously stated their desire for the histogram, I can't take that :(
<victor_ludorum> And I am extremely sorry for being unclear. I don't have much knowledge; I thought that I could complete it, but sorry :/
<hkaiser> victor_ludorum: I wouldn't outright drop the ball if I was you
<hkaiser> in the end it depends on the quality of the proposals submitted
<hkaiser> those will be the basis for making the decision
<victor_ludorum> Oh.. I got your point , I will focus on my proposal
<victor_ludorum> If I will make good proposal then my chances will be high thanks sir :) for advice
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<victor_ludorum> But I have one question: what are we supposed to have in the arithmetic performance counters? In the issue they are related to statistical properties, but sir (Mikael Simberg) has suggested log and exp functions
EverYou__ has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
<victor_ludorum> in the hpx-user group .
<zao> Parallel sorting? I still have nightmares from implementing snake-sort in my parallel algorithms course.
<heller_> there can also be two projects on the same topic
<victor_ludorum> Oh.. that's nice
<heller_> hkaiser: regarding checkpointing: yes conflating id splitting and serialization is a problem. The other problem is that you seem to want different semantics: shallow vs. deep serialization of clients, for example
<heller_> the id splitting is partially taken care of by having the different archives
<heller_> the different semantics is more problematic
<hkaiser> heller_: no, that's not a serialization problem
<hkaiser> checkpointing simply uses the mechanism of serialization for its functionality and will never need to serialize an id_type
<hkaiser> it will always dereference the id_type and serialize the component instead
<hkaiser> heller_: also, we can't use different archives as many serialization functions are specialized for output_archive
aserio has quit [Ping timeout: 255 seconds]
<heller_> not archives, containers
<heller_> our archives are type erased over the containers
aserio has joined #ste||ar
<heller_> and yes, always dereferencing the id_type and serializing the component is exactly what I was talking about
<heller_> there is no way to express that in the current serialization framework
<heller_> those are the two different semantics for the same operation I was talking about
<heller_> NB: We'll get a global EDF scheduler soon, with executors to specify deadlines for tasks
<hkaiser> nice
<hkaiser> we need to disentangle serialization from special handling of id_types
<hkaiser> then we can change what happens for those, in one case we do splitting in the other dereferencing
<diehlpk_work> victor_ludorum, Are you still interested in the opencv project?
<victor_ludorum> Nope sir, sorry, I don't have much knowledge on that
<victor_ludorum> sorry :(
<diehlpk_work> No problem, just wanted to know
<victor_ludorum> Thanks sir!! :)
<heller_> hkaiser: id_types must go!
<heller_> hkaiser: back to topic... I don't know how ...
<heller_> the only way I could think of is to take a completely different approach to serialization
<heller_> that is, instead of asking the class to serialize, we should rather ask it to reflect itself
<heller_> and on that hierarchical tuple, we could apply either serialization to a parcel buffer or a checkpoint or whatever
<heller_> but that's a rather drastic breaking change
<K-ballo> sooner or later we'll end up needing "descriptive" serialization
<heller_> would be nice if we could find an API that'd allow us to gradually switch to that
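A rough shape of the "reflect, then decide per archive" API being discussed might look like this. Everything below is a hypothetical C++17 sketch with invented names, not HPX's actual serialization framework: a type exposes its members as a tuple of references, and each archive visitor decides what a member means - the shallow wire archive records only a reference id (where id splitting would happen), while the deep checkpoint archive dereferences and stores the payload.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <tuple>

// Hypothetical stand-in for a client/id_type that refers to a remotely
// managed component.
struct component_ref {
    int id;
    std::string payload;  // stands in for the referenced component's state
};

struct widget {
    int size;
    component_ref child;
    // Reflection: expose members instead of serializing them directly.
    auto reflect() { return std::tie(size, child); }
};

// Shallow "wire" archive: a reference stays a reference.
struct wire_archive {
    std::ostringstream out;
    void operator()(int v) { out << "i:" << v << ";"; }
    void operator()(component_ref const& r) { out << "ref:" << r.id << ";"; }
};

// Deep "checkpoint" archive: dereference and store the component itself.
struct checkpoint_archive {
    std::ostringstream out;
    void operator()(int v) { out << "i:" << v << ";"; }
    void operator()(component_ref const& r) { out << "obj:" << r.payload << ";"; }
};

// Apply one archive to every reflected member: the class is asked to
// reflect once, and the archive supplies the semantics.
template <typename Archive, typename T>
std::string archive_all(Archive ar, T& obj) {
    std::apply([&](auto&... members) { (ar(members), ...); }, obj.reflect());
    return ar.out.str();
}
```

The same reflect() feeds both archives, which is the attraction: the two semantics for id-bearing members live in the archives, not in every class's serialize function.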
Smasher has joined #ste||ar
david_pfander has quit [Ping timeout: 256 seconds]
vamatya has joined #ste||ar
david_pfander has joined #ste||ar
david_pfander has quit [Ping timeout: 240 seconds]
aserio has quit [Ping timeout: 240 seconds]
aserio has joined #ste||ar
hkaiser has quit [Quit: bye]
EverYoung has joined #ste||ar
EverYou__ has quit [Ping timeout: 276 seconds]
eschnett has quit [Quit: eschnett]
aserio has quit [Quit: aserio]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
bibek has quit [Quit: Leaving]
hkaiser has joined #ste||ar
Smasher has quit [Remote host closed the connection]
victor_ludorum has quit [Ping timeout: 260 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 252 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<jbjnr> hkaiser: would you have time for a skype chat tomorrow maybe?
<hkaiser> jbjnr: absolutely