aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
nikunj_ has quit [Ping timeout: 260 seconds]
nikunj has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
nikunj has quit [Quit: Page closed]
Viraj has joined #ste||ar
Viraj has quit [Ping timeout: 260 seconds]
anushi has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
parsa has joined #ste||ar
parsa has quit [Read error: Connection reset by peer]
parsa| has joined #ste||ar
Anushi1998 has joined #ste||ar
parsa| has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
mcopik has quit [Ping timeout: 260 seconds]
<jbjnr> heller: yt? Not expecting an answer on Sunday morning, but ...
<jbjnr> I want to create a user for pycicle, so that it can push status settings without my face/account. is there a special type of user on github that I can create that isn't like a normal user?
<jbjnr> i.e. doesn't have any repos etc. just does status setting and that kind of thing
mcopik has joined #ste||ar
mcopik has quit [Ping timeout: 265 seconds]
Anushi1998 has quit [Quit: lunch]
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vpeHk
<github> hpx/gh-pages 64bc8d9 StellarBot: Updating docs
<heller> jbjnr: there are special app accounts, IIRC
<heller> jbjnr: my internet connection here is more than sloppy though ... so hard to check
Anushi1998 has joined #ste||ar
Anushi1998 has quit [Quit: Leaving]
<jbjnr> heller: ok. I found those, but you need to create a developer thingy and do lots of stuff, so instead I've just created a 'pycicle' user account and will use that if someone adds it to the STEllAR group so it can make changes etc.
<heller> ok
<heller> jbjnr: will do
<github> [hpx] sithhell force-pushed docker_image from a733503 to aa9302b: https://git.io/vxFvP
<github> hpx/docker_image aa9302b Thomas Heller: Fixing Docker image creation...
<heller> jbjnr: done
<heller> nice logo btw
<heller> docker is quite a bitch, btw
Anushi1998 has joined #ste||ar
nikunj has joined #ste||ar
Anushi1998 has quit [Quit: Bye]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/vpeNB
<github> hpx/master fd3278d Hartmut Kaiser: Merge pull request #3284 from STEllAR-GROUP/relaxed_atomics...
<github> [hpx] hkaiser deleted relaxed_atomics at 4ac1ef7: https://git.io/vpeNR
<heller> hkaiser: No flights for me tonight. Staying in Atlanta
<hkaiser> heller: ok
<hkaiser> heller: are you in Atlanta already?
<heller> Start at 300
<heller> No
<hkaiser> you can always ask in Atlanta at the gate
<heller> Sure
parsa has joined #ste||ar
<heller> I reserved a room just in case
<hkaiser> the baggage is the problem, though
<heller> I don't have any checked in
<hkaiser> k
<hkaiser> heller: well, let me know, I'll pick you up
<heller> Flight leaves 1:45 hours
<hkaiser> k
<heller> Would be late for you
<hkaiser> no problem
<hkaiser> the flight leaves 1:45am?
<hkaiser> that's unusual
<heller> No. In 1:45 hours
<hkaiser> heller: delta says the first flight tomorrow leaves at 8:55am
<heller> At 5pm from Amsterdam
<hkaiser> k
<heller> To Atlanta
<heller> Tomorrow the flight leaves at 10
<hkaiser> ok, DL1925?
<heller> Sorry, 10:50
<heller> Yes
<hkaiser> ok, I'll be there
<heller> Thanks!
<hkaiser> looking forward to seeing you again
<heller> Me too!
<heller> How long is the drive from Atlanta to Baton Rouge?
<hkaiser> heller: I'll pick you up at the airport in BR ;-)
<hkaiser> not in ATlanta
<hkaiser> the drive would be 8-10 hours
<hkaiser> one way
<heller> Sure ;)
<heller> Just checking if that'll be a viable option for me tonight :p
<hkaiser> nah, don't do that
<hkaiser> you can drive from Houston, but not Atlanta
<K-ballo> flight trouble, or did you plan for a night in Atlanta heller?
circleci-bot has joined #ste||ar
<circleci-bot> Success: hkaiser's build (#12792; push) in STEllAR-GROUP/hpx (master) -- https://circleci.com/gh/STEllAR-GROUP/hpx/12792?utm_campaign=chatroom-integration&utm_medium=referral&utm_source=irc
circleci-bot has quit [Client Quit]
<heller> K-ballo: just very bad connections
<hkaiser> K-ballo: I have a question wrt future unwrapping
<hkaiser> K-ballo: the test future_unwrap_878 ensures that if the outer future holding an exception is unwrapped, the exception ends up being in the inner one
<hkaiser> do I misunderstand?
<K-ballo> looking..
<github> [hpx] sithhell force-pushed docker_image from 8bf3ca7 to 00d0767: https://git.io/vxFvP
<github> hpx/docker_image 00d0767 Thomas Heller: Fixing Docker image creation...
<Guest30219> [hpx] sithhell opened pull request #3289: Fixing Docker image creation (master...docker_image) https://git.io/vpeA5
<K-ballo> hkaiser: yes, that's how it looks
<hkaiser> ok
* K-ballo has to learn to link to source from specific commits
<hkaiser> thanks
<hkaiser> K-ballo: is that a coincidental effect or is this deliberate?
<K-ballo> I'd imagine it is deliberate
<K-ballo> the model for implicit unwrapping is as if:
<K-ballo> `outer.then(future inner){ return inner.get(); }` ?
<K-ballo> uhm, that's wrong
<K-ballo> future<future<T>> outer = ...;
<K-ballo> outer.then(future<future<T>> outer) {
<K-ballo> future<T> inner = outer.get();
<K-ballo> return inner.get();
<K-ballo> };
<hkaiser> ok, makes sense
<K-ballo> so both outer and inner exceptions would be propagated (implicitly, via .get()), to the unwrapped future
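A minimal sketch of the unwrapping model described above, assuming hpx::future; the helper function name is illustrative, not the actual HPX implementation:

    #include <hpx/include/lcos.hpp>

    // unwrap(outer) behaves as if implemented like this: an exception stored
    // in the outer future is rethrown by the outer .get(), an exception
    // stored in the inner future by the inner .get(), so either one ends up
    // in the returned (unwrapped) future.
    hpx::future<int> unwrap_model(hpx::future<hpx::future<int>> outer)
    {
        return outer.then([](hpx::future<hpx::future<int>> o) {
            hpx::future<int> inner = o.get();
            return inner.get();
        });
    }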
<hkaiser> (I'm trying to optimize unwrap if the outer future is ready, as in this case we can reuse the shared state of the inner future...)
nikunj has quit [Ping timeout: 260 seconds]
<K-ballo> if the outer future is ready and has a value, just return .get()? like that?
<hkaiser> something like that, yes
<hkaiser> but without allocating a new shared state
<hkaiser> but it breaks if the outer future is exceptional
<hkaiser> I'll probably just add a check for has_exception and fall back to the old unwrapping
<K-ballo> do we have split shared_state<R> from a shared_state base that's value-type agnostic?
<K-ballo> nevermind, the future would use shared_state<R>
<hkaiser> yes
<hkaiser> K-ballo: refresh that link, I think this should work
<K-ballo> nod
<hkaiser> the test passes now
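A rough sketch of the ready-future optimization being discussed; is_ready() and has_exception() are HPX extensions to std::future, while unwrap_generic is a hypothetical stand-in for the existing unwrapping path:

    #include <hpx/include/lcos.hpp>
    #include <utility>

    template <typename T>
    hpx::future<T> unwrap_if_ready(hpx::future<hpx::future<T>> outer)
    {
        // if the outer future is already ready and holds a value, hand back
        // the inner future directly and reuse its shared state, avoiding the
        // allocation of a new one
        if (outer.is_ready() && !outer.has_exception())
            return outer.get();

        // not ready yet, or the outer future is exceptional: fall back to
        // the general unwrapping path
        return unwrap_generic(std::move(outer));
    }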
<jbjnr> my daughter made me a logo for pycicle in blender https://github.com/pycicle
<jbjnr> I asked for a 3D cog/gear in the colours of cmake/cdash. Not bad!
<zao> That cog looks a bit worn and looks like it'll skip if used. A truthful rendition of the system :P
<jbjnr> <sigh>
<zao> I kid, I kid. Nice job.
nikunj has joined #ste||ar
jaafar has joined #ste||ar
parsa has quit [Read error: Connection reset by peer]
parsa| has joined #ste||ar
ct-clmsn has joined #ste||ar
Anushi1998 has joined #ste||ar
circleci-bot has joined #ste||ar
<circleci-bot> Success: hkaiser's build (#12866; push) in STEllAR-GROUP/hpx (ready_future_unwrap) -- https://circleci.com/gh/STEllAR-GROUP/hpx/12866?utm_campaign=chatroom-integration&utm_medium=referral&utm_source=irc
circleci-bot has quit [Client Quit]
<github> [hpx] K-ballo force-pushed logging from 3a3c671 to 4ad11f6: https://git.io/vx6Yc
<github> hpx/logging 4ad11f6 Agustin K-ballo Berge: pruning util/logging
<hkaiser> jaafar: looks nice, say hello to her - good job
<hkaiser> jbjnr: ^^
<jaafar> There I go again :)
<jaafar> Got to resume my HPX experiments again soon
<hkaiser> jaafar: hey - sorry for poking you
<jaafar> haha no, it's my fault for lurking!
<jaafar> hkaiser: will I see you in Aspen?
<hkaiser> jaafar: no, I won't be able to come (I originally wanted to give a talk but I missed the call for papers :/ )
<jaafar> I'm sorry not to see you!
<hkaiser> jaafar: we might meet at cppcon in Sep
parsa| has quit [Quit: Zzzzzzzzzzzz]
Anushi1998 has quit [Remote host closed the connection]
<github> [hpx] hkaiser force-pushed ready_future_unwrap from b773c62 to 424daa3: https://git.io/vpeug
<github> hpx/ready_future_unwrap 424daa3 Hartmut Kaiser: Do not unwrap ready future...
Anushi1998 has joined #ste||ar
<jbjnr> hkaiser: I am going to submit a talk to cppcon this year hopefully.
diehlpk has joined #ste||ar
<hkaiser> jbjnr: nice
<mbremer> @hkaiser: yt?
<hkaiser> mbremer: here
<mbremer> Ahh, I had some questions I wanted to bug you about.
<hkaiser> sure
<mbremer> The first was with components, and register_as and connect_to.
<mbremer> Presumably these are how components are registered in AGAS. I was curious if there were any pitfalls with connect_to.
<mbremer> It's a void function, so I was curious what happens if the object hasn't been registered in AGAS yet?
<hkaiser> it will wait
<hkaiser> the future returned by connect_to will become ready only once the object has been registered
<hkaiser> well, or the client object will become valid only after the object was registered
<mbremer> Yeah, I guess that was what I was worried about. If the client isn't valid yet, and I call an action with it. Presumably that will cause the code to crash?
<mbremer> I.e. do I need to be careful that all of my clients are valid before I start invoking actions through them?
<hkaiser> no, it will delay the action invocation until the client becomes ready
<hkaiser> it might block however, iirc
<hkaiser> we should change it such that it doesn't block...
<hkaiser> need to look
<mbremer> What would it do instead? Just have the action return after the client is ready?
<hkaiser> yes
<hkaiser> the action invocation accesses the id stored in the client, which will become available only once the client has become ready
<mbremer> That sounds like the best thing I could think of.
<hkaiser> mbremer: well, it could return without blocking, even if the id is not available
<mbremer> I guess I wonder if it could then be nice to have connect_to return a future that becomes ready once the id is available.
<hkaiser> it does that internally...
<mbremer> Oh, hmm. I thought it returned void atm.
<hkaiser> agas::on_symbol_namespace_event returns a future<id_type>
<mbremer> Ah interesting. And this is blocking correct?
<hkaiser> no, this is not blocking, it returns a future
<hkaiser> the blocking part is when an action is being invoked using a client that is not ready
<mbremer> But then the move assignment to `*this=...` doesn't block?
<hkaiser> c.get_id() will block
<hkaiser> no, the move assignment will not block
<hkaiser> that link points to the function that should be changed to not block, I'll try to remember
<mbremer> I can open an issue for you, if you'd like
<hkaiser> mbremer: yes, please
<hkaiser> mbremer: hold on
<hkaiser> looks like it's already implemented :-P (https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/lcos/async.hpp#L102-L113)
<hkaiser> so no blocking for you
<mbremer> How nice :-)
<hkaiser> the first one assumes that the client is ready, so it is called from elsewhere
<hkaiser> yah, the first is called as the continuation for the second one
<mbremer> I see so the future will be ready anyway
<hkaiser> yes
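A hedged usage sketch of the register_as/connect_to pattern discussed above; the component type, symbolic name, and action are illustrative only:

    // locality A: create the component and publish its id under a symbolic name
    tile_client tile = hpx::new_<tile_client>(hpx::find_here());
    tile.register_as("/app/tile/0");

    // locality B: attach to the component by name; connect_to itself does not
    // block, the client's id simply becomes valid once the registration above
    // has happened
    tile_client remote;
    remote.connect_to("/app/tile/0");

    // invoking an action through a not-yet-ready client is safe: the call is
    // chained onto the client becoming ready (see the async overloads linked
    // in the log)
    hpx::future<double> f = remote.do_work();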
<mbremer> My second question, has to do with how HPX interplays with the c++11 memory model.
<mbremer> I was thinking of writing a type of double buffer to replace the channels.
<hkaiser> yah, our channels could be more efficient :/
<mbremer> I was thinking that we could use constraints applied by the task graph to simplify the implementation. Basically only two messages can ever be in flight between two stencil tiles
<hkaiser> ok
<hkaiser> channel has an implementation allowing for one result in flight
<hkaiser> why not add a specialization for two?
<mbremer> Yeah, I could.
<mbremer> Well what I was curious about was if I could skip the mutexes.
<hkaiser> currently we have local channels and remote ones
<hkaiser> the remote channel always uses the unlimited_buffered_channel implementation
<mbremer> Since the stencil calls get synchronized anyway with a when_all call.
<hkaiser> k
<mbremer> Is that enough synchronization to guarantee no race conditions occurring?
<hkaiser> sounds like a possible optimization, a fixed-size-buffer channel
<hkaiser> shrug, I don't know
<mbremer> Yeah, but I think you definitely need a bidirectional stencil.
<hkaiser> ok
<mbremer> So roughly, if you have two components, it relies on the fact that for the i-th timestep, the (i+2)-th message couldn't be sent until the i-th message has been processed
<hkaiser> sounds correct to me
<mbremer> And so with two buffers, if you get sufficient thread synchronization in the when_all, you could guarantee that threads would only ever be exclusively writing or reading the vector at the recipient
<hkaiser> mbremer: you could easily protect your internal buffer with an HPX mutex
<hkaiser> that shouldn't be too much of an overhead
<hkaiser> but not using a map<> to store the messages is definitely beneficial
<mbremer> kk, I guess that is the answer to the more important question
anushi_ has joined #ste||ar
<hkaiser> mbremer: https://www.boost.org/doc/libs/1_67_0/doc/html/circular_buffer.html should be a nice data structure
<hkaiser> you tell it at construction time how many slots it has
<mbremer> This is exactly what I was thinking of
<hkaiser> Go allows you to specify the length of the pipeline for channels
<hkaiser> I was too lazy to implement that ;)
<hkaiser> but here is your chance to become famous ;-)
Anushi1998 has quit [Ping timeout: 245 seconds]
parsa has joined #ste||ar
<mbremer> I would be interested in trying to implement that
<hkaiser> by all means
<hkaiser> mbremer: easy enough to add a constructor parameter to lcos::channel allowing you to specify the buffer length to use (defaulting to -1 or some such)
<hkaiser> then use different specializations of the local channel to implement it
<hkaiser> -1 -> unlimited, 1: one_element_channel, else: your new stuff
<mbremer> Yes, I see.
<mbremer> I'm looking at the local channel now
<mbremer> hkaiser: kk, let me think on it some. I really like this circular buffer data structure.
<mbremer> But thanks for all of your insight. That helps a lot (and makes me sleep better at night ;) )
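A rough sketch of the fixed-capacity buffer being discussed, combining boost::circular_buffer with an HPX mutex; the class name and interface are illustrative, not the existing lcos::channel API (a real channel would return futures from get() and handle an empty buffer):

    #include <hpx/include/local_lcos.hpp>
    #include <boost/circular_buffer.hpp>

    #include <cstddef>
    #include <mutex>
    #include <utility>

    template <typename T>
    class fixed_buffered_channel
    {
        hpx::lcos::local::mutex mtx_;
        boost::circular_buffer<T> buffer_;

    public:
        // the number of slots is fixed at construction time
        explicit fixed_buffered_channel(std::size_t capacity)
          : buffer_(capacity)
        {}

        void set(T value)
        {
            std::lock_guard<hpx::lcos::local::mutex> lock(mtx_);
            buffer_.push_back(std::move(value));
        }

        T get()
        {
            std::lock_guard<hpx::lcos::local::mutex> lock(mtx_);
            T value = std::move(buffer_.front());
            buffer_.pop_front();
            return value;
        }
    };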
<ct-clmsn> @hkaiser, where's a good place to start reading about the policies that y'all implemented in executors?
<hkaiser> :D
<hkaiser> ct-clmsn : the code ;-)
<ct-clmsn> (for like numa domain allocations, etc)
<ct-clmsn> doah!
<ct-clmsn> lol
<ct-clmsn> fair enough!
<hkaiser> no docs yet, sorry
<ct-clmsn> it's cool
<ct-clmsn> was hoping for a little bit more of a general orientation (directory) to be sure my intuition isn't too far off
<hkaiser> ct-clmsn: we know it's too much in flux to spend time documenting things
<ct-clmsn> rgr
<hkaiser> ct-clmsn: well, I can point you to the directories
<ct-clmsn> it's all in runtime?
<ct-clmsn> ah ok it's all in hpx-compute
<hkaiser> nod
<ct-clmsn> yeah, i was *waaay* off
<ct-clmsn> thanks
<hkaiser> ct-clmsn: please ask if you get stuck
<ct-clmsn> np
<ct-clmsn> i've still some blaze things to wrap up
<ct-clmsn> just need to get some pointers out to possible collaborators in ofa
<hkaiser> ct-clmsn: I think the recent (pending) changes to phylanx will give us a speedup of a factor of two
<ct-clmsn> wow!
<ct-clmsn> so over numpy?
<hkaiser> yes
<hkaiser> more to come ;)
<ct-clmsn> these are non-phylanx?
<ct-clmsn> (hpx improvements?)
<hkaiser> LRA was on par with non-phylanx/numpy, now it's twice as fast
<ct-clmsn> very nice
<hkaiser> a bit of hpx, mostly phylanx
<ct-clmsn> is it time for more algorithms?
<hkaiser> ct-clmsn: absolutely!
<hkaiser> k-means is in the works already
<ct-clmsn> @hkaiser, excellent...will put more into that during work hours
<ct-clmsn> there are a couple of tensor algorithms that might be a nice extension of the current primitives
<hkaiser> ok
<hkaiser> ct-clmsn: not sure yet how to represent tensors, though
anushi_ has quit [Ping timeout: 268 seconds]
<ct-clmsn> @hkaiser, a couple of licensing issues - will need to hit up the implementers (their work is all GPL)
<hkaiser> nod, GPL is evil
<ct-clmsn> @hkaiser, these folks use numpy arrays
<hkaiser> k
<hkaiser> and numpy algorithms
<ct-clmsn> bbiab
<ct-clmsn> rgr
<hkaiser> ttyl
nikunj has quit [Ping timeout: 260 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
<ct-clmsn> wasn't that long
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
eschnett has joined #ste||ar
parsa has joined #ste||ar
nikunj has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
mbremer has quit [Quit: Page closed]
parsa has quit [Quit: Zzzzzzzzzzzz]
quaz0r has joined #ste||ar
ct-clmsn has quit [Read error: Connection reset by peer]
ct-clmsn has joined #ste||ar
ct-clmsn is now known as Guest59659
Anushi1998 has joined #ste||ar
parsa has joined #ste||ar
Guest59659 is now known as ct-clmsn__
ct-clmsn__ has quit [Quit: Leaving]
Anushi1998 has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
<jbjnr> K-ballo: yt?
<K-ballo> partially
<jbjnr> any idea why this does not compile? I don't see why it can't resolve the template param https://gist.github.com/biddisco/1ea32940d68479d9bd307b393680c552
<jbjnr> it's a minimal example, not to be considered useful
<K-ballo> not a deducible context, having both uses of Args
<K-ballo> I assume you want to deduce just from `fn`?
<jbjnr> yes
<K-ballo> well apparently you can't, I'm mildly surprised
<jbjnr> hmmm
<K-ballo> could you deduce them separately, Params... and Args..., and ignore the int?
<K-ballo> or else add a second step
<jbjnr> not sure
<jbjnr> I couldn't understand why it didn't work, so didn't try very hard to find another way
<K-ballo> something like this, to keep things simple: https://wandbox.org/permlink/bH2cLnOVsEi53rKh
<K-ballo> a pack is only deducible when it is trailing
<K-ballo> (int, Args...) fine, (Args..., int) not fine
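A minimal illustration of the trailing-pack rule, with hypothetical function names unrelated to the gist:

    template <typename... Args>
    void deduce_ok(void (*fn)(int, Args...)) {}    // pack is trailing: deducible

    template <typename... Args>
    void deduce_bad(void (*fn)(Args..., int)) {}   // pack is not trailing: non-deduced context

    void f(int, double, char);

    int main()
    {
        deduce_ok(&f);      // OK: Args deduced as {double, char}
        // deduce_bad(&f);  // error: Args... cannot be deduced here
    }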
parsa has quit [Quit: Zzzzzzzzzzzz]
<jbjnr> ok. the trailing part I was suspecting might be a problem
<jbjnr> I guess your solution will work. thanks. I will try it
<jbjnr> PS. how do you generate those wandbox permalinks? I had a wandbox example, but had to paste it into gist to make the link
<K-ballo> the UI sucks.. there's a [Share] button right above the output after you run the code, and it turns into an URL link that you can copy
<jbjnr> aha. I see it. Thanks
<jbjnr> Thanks for the help. I will go to bed and get this into my real code tomorrow. Cheers.
diehlpk has quit [Ping timeout: 240 seconds]
eschnett has quit [Quit: eschnett]
parsa has joined #ste||ar
<Anushi1998> hkaiser: Should we change hpx::cout to std::cout or link it with iostreams component? Which should be preferred?
<nikunj> hkaiser: talking about the issue: https://github.com/STEllAR-GROUP/hpx/issues/3290
<hkaiser> Anushi1998: I think we should link with our iostreams
<Anushi1998> Okay :)
<hkaiser> that's kinda the point of this example
<nikunj> yes
<Anushi1998> hkaiser: Can you please suggest why it was working with clang but not gcc?
parsa has quit [Read error: Connection reset by peer]
parsa| has joined #ste||ar
parsa| has quit [Quit: Zzzzzzzzzzzz]
<github> [hpx] NK-Nikunj opened pull request #3291: Fixes #3290 (master...fix-#3290) https://git.io/vpvcj
parsa has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
<zao> `fix-#3290` in a pull request branch name.. that might actually _break_ some of my scripts.
<zao> Hrm, maybe they're all of the form ref/NN/head, phew.
<nikunj> zao: I didn't realize that earlier
<nikunj> what sort of a naming convention should I use?
<nikunj> I'll use that from my next pr
<zao> Anything goes, really. Was just surprised to see the #.
<zao> They tend to be of the form you had, like fixing-3290 or fix-subject-thing, but it's all about _your_ workflow.
<nikunj> zao: Oh, then I'll drop the # from next time
<nikunj> zao: true
<zao> It shouldn't really matter, but if you work with them yourself on the command line, # may be interpreted as a comment.
<nikunj> oh ya for that I use escape sequencing, you know tabs do wonders at times ;)
<nikunj> zao: I guess I'll drop them then.. thanks for telling me :)
nikunj has quit [Quit: Page closed]
Anushi1998 has quit [Quit: Bye]