aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/vHW4x
<github> hpx/master 59b11ef Hartmut Kaiser: Minor documentation formatting change
hkaiser has quit [Quit: bye]
eschnett has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
akheir has quit [Remote host closed the connection]
zbyerly_ has joined #ste||ar
jgoncal has joined #ste||ar
jgoncal has quit [Client Quit]
jgoncal has joined #ste||ar
jgoncal has quit [Quit: jgoncal]
jgoncal has joined #ste||ar
jgoncal has quit [Client Quit]
patg has quit [Quit: See you later]
shoshijak has quit [Ping timeout: 240 seconds]
shoshijak has joined #ste||ar
shoshijak has quit [Ping timeout: 240 seconds]
<taeguk> jbjnr: I added benchmark code for is_heap and is_heap_until to my PR. If you need it, see:
pree has joined #ste||ar
shoshijak has joined #ste||ar
hkaiser has joined #ste||ar
<jbjnr> taeguk thanks
<github> [hpx] hkaiser pushed 1 new commit to serialization_access_data: https://git.io/vHWXj
<github> hpx/serialization_access_data 9df8089 Hartmut Kaiser: Adding missing #includes
<jbjnr> hkaiser: you must be up very early - or in another time zone :)
<hkaiser> can't sleep... as usual
<jbjnr> sorry.
<jbjnr> We have custom schedulers running on custom thread pools :) - we rock!
<hkaiser> \o/
<hkaiser> shoshijak, jbjnr: you rock!
<shoshijak> :) :) :)
david_pfander has joined #ste||ar
shoshijak has quit [Ping timeout: 240 seconds]
shoshijak has joined #ste||ar
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
bikineev has joined #ste||ar
bikineev has quit [Remote host closed the connection]
<jbjnr> hkaiser / heller_ : can you think of any benchmarks that we might run that would benefit from multiple thread pools. We will start with raffaele's matrix code, because it has communication problems, but if you are aware of other applications/tests/benchmarks that we can experiment with that might show an immediate benefit, please say.
<jbjnr> I was wondering if tweaking octotiger to have a communication pool might be worth the effort....
<hkaiser> stream benchmark, use one pool per numa domain
<hkaiser> that prohibits stealing between numa domains
<heller_> yeah, essentially anything that has different application domains
<heller_> communication, agas, computation, or somesuch
<hkaiser> agas not so much, I guess
<heller_> why not?
<heller_> we could replace the background work completely that way
<hkaiser> you can't keep all of the agas data local to that extra pool, so what's the point
pree has quit [Ping timeout: 245 seconds]
<hkaiser> also agas does not create sufficient work warranting a separate pool
<jbjnr> the stream benchmark would make an interesting case to use for a simple example of use though
<hkaiser> right
<hkaiser> this would also force you to get the initial configuration API under control
<jbjnr> yes. we have a simple method now to say create_pool("numa1") and add all pus on one numa domain to it, and all pus on another domain to another etc etc
<jbjnr> actually, a 4 socket node, with 4 numa domains, would be great, to demonstrate the simplicity of the api
<hkaiser> jbjnr: I'd suggest to reuse the command line syntax somehow
<hkaiser> pass a string describing the thread<-->pu allocation
<jbjnr> lol
<hkaiser> thread:1=pu:2
<hkaiser> why not?
<jbjnr> because the command line syntax requires the user to know too much about the node config. for a stream benchmark, we just want to say "for_each(numa_domain), create a pool and go"
<jbjnr> which is how we are implementing it
<jbjnr> then it runs just fine on any architecture, even KNL without the user needing any command line params
<hkaiser> well, then write: 'thread:0-8=node:0'
<jbjnr> what is 8?
<hkaiser> or thread:all=node:0
<jbjnr> we have 32 on some nodes, 72 on others, 8 on some, 4 on others
<hkaiser> jbjnr: whatever, what I meant is to eventually have a means of fine-tuning
<jbjnr> we'll add that stuff in version 2
<hkaiser> I know that at some point a person using the nick jbjnr will come and say - 'hey I need one core for communication, use the rest for everything else, that one core should be <this>'
<hkaiser> jbjnr: no objections to adding this later, better to prepare the infrastructure now than later, though
<hkaiser> jbjnr: at least I'd ask you to use the same terminology as we use on the command line, i.e. something like pool p("node:0");
<jbjnr> rp.add_resource(rp.get_numa_domains().front().cores_.front().pus_, "mpi");
<hkaiser> WHAT?
<hkaiser> you're kidding, right?
<heller_> I am in favor of such a hierarchy
<jbjnr> get the first numa domain, give me the first core and the first pu on it
<jbjnr> entirely architecture independent
<jbjnr> no command line params
<hkaiser> what is 'node:0.core:3.pu:1' if not a hierarchy?
<heller_> why go with strings when you can handle it with C++ objects?
<jbjnr> and numa_domains.back()
<hkaiser> jbjnr: that's an awful API
<heller_> you can easily build any string parsings on top of that, don't you?
<jbjnr> the last numa domain - if you do not know how many there are, how do you ask for it on the command line?
<hkaiser> heller_: that's my point - I don't care about the internal representation, I care about how things are exposed
<hkaiser> jbjnr: write node:last
<jbjnr> hkaiser: we'll clean up the syntax ... but the concept is good
<heller_> hkaiser: but do I have to go over parsing a string when I am able to express it in code?
<jbjnr> hkaiser: so when someone runs octotiger on a different machine, they have to learn all the details about the cpus first
<hkaiser> jbjnr: the internal representation might be good, but unacceptable as the eventual user facing API
<jbjnr> and for every machine tyou run it on, you need a different command line
<hkaiser> why?
<hkaiser> node:0 or node:first gives you the first
<jbjnr> I want 3 pools on a 3 core machine, running a scheduler X, and 4 pools running scheduler Y,
<hkaiser> node:-1 or node:last the last one
<jbjnr> etc etc
<jbjnr> command line nightmare
<hkaiser> don't tell me that this would not be a nightmare using your syntax
<heller_> I think those things are optional though
<jbjnr> this syntax allows the user to write code to set up the pools instead of making the user use the command line
<hkaiser> what is optional? user friendlyness?
<heller_> not optional ... orthogonal
<hkaiser> jbjnr: I didn't say that you should use the command line to set things up, I said you should consider using the command line syntax to do that
<hkaiser> that's different things
<jbjnr> aha. I see
<hkaiser> i.e. pool p("node:last")
<jbjnr> ok, no problem, we can add a string based setup easy peasy
<hkaiser> right, we can always extend the existing parser to be more flexible
<jbjnr> but I suspect most users would prefer iterators over cores/pus/domains etc
<jbjnr> much easier to make loops over domains etc
<heller_> right, like the changes I made to the stream benchmark
<hkaiser> no problem, but not as the main API, I hope
<heller_> the initial string based approach didn't work at all on different machines; right now, it runs correctly on all machines without additional messing with command line parameters
<jbjnr> +1
<hkaiser> heller_: you didn't try, also I did fix the parsing
<heller_> for(auto numa_domain: get_numa_domains()) pool p(num_domain);
<heller_> how would you express this with command line parsing?
<hkaiser> anyways, go ahead, we'll make things pretty later :/
<heller_> how would you express such a loop with a string?
<hkaiser> for (auto i : get_num_numa_domains()) pool p("node:" + to_string(i));
<jbjnr> ours works with
<jbjnr> for (auto i : get_num_numa_domains()) pool p[i]
<jbjnr> essentially
<hkaiser> how do I know that p(i) refers to a numa domain and not a pu or a core?
<jbjnr> sorry I meant for (auto i : get_num_numa_domains()) pool i[core...]
<heller_> get_numa_domains() would return a vector<numa_node>?
<jbjnr> a vector of numa nodes, which contains vectors of cores, which contain vectors of pus
<heller_> hkaiser: using the type system instead of strings :P
<hkaiser> as I said I have no problem with this internal representation, what I would like to see is a string based representation on top of that
<jbjnr> I think hkaiser might prefer if we stick to bitmasks as strings
<hkaiser> jbjnr: I don't - don't be obnoxious
<jbjnr> sorry.
<hkaiser> heller_: with such a string based api you could even generate hierarchies of those types specific to each pool for later introspection
<jbjnr> we also will support the numa domain closest to the PCI bus that owns the network card or GPU, etc etc
<hkaiser> auto h = get_topology("node:0");
<hkaiser> jbjnr: sure
<jbjnr> ^^ we will add that
<jbjnr> but why do we need node + string"i" when we can just get node[i]?
<heller_> hkaiser: i think the string based API and the actual types used have a one to one relation, you need the actual hierarchy for the string based API, but you can have the hierarchy being built up without a parser upfront
<hkaiser> absolutely
<hkaiser> in the end we need a means to control things from the command line, so this will come in handy
<heller_> hmm
<heller_> not sure I agree
<heller_> the command line sets up the basic footprint of the application, right?
<hkaiser> so you don't think we need a means of controlling things through the command line?
<heller_> in your application, you then want to build your pools depending on this footprint, no?
<hkaiser> shrug
<hkaiser> I don't want to recompile just to try different setups
<heller_> that was my understanding so far
<heller_> that's what I am saying
<hkaiser> so you need command line options
<heller_> look at the stream benchmark, no need to recompile to try different setups. It is already controllable with --hpx:bind
<hkaiser> isn't that what I'm saying?
<heller_> the thing you specify with --hpx:bind is different than how you setup your pools, i'd say
<hkaiser> what you can't do today is tell your stream benchmark how to split the pools
<hkaiser> hell you might want to try having 2 pools per numa domain
<heller_> sure
<hkaiser> heller_: again, I'm fine with the type hierarchy, that's a nice way to represent things
<hkaiser> heller_: I'd like to be able to use the existing string description syntax to generate that hierarchy
<jbjnr> hkaiser: if you want the user to decide how many pools and where they are from the command line, then the user can just add special command line options that suit their internal needs. Doing all the bind and setup that way will cause such headaches on all the different architectures possible.
<hkaiser> that's just one additional function after all
<hkaiser> jbjnr: that's exactly what I would like to avoid - the headache
<heller_> hkaiser: I don't think anyone opposed to having a parser on top of it. I am just arguing that we should get the underlying representation correct first
<hkaiser> jbjnr: I think we're not too far apart in what we would like to achieve
<hkaiser> heller_: sure, fine by me - read back ^^ - I said as much
<jbjnr> I think you misunderstand me. the user can add an option --pools=4 for their own application needs and then in the code, iterate over that. Making it possible to say --hpx:bind="0:4node:0-6:pool1, pool2=node:....." is a headache
<hkaiser> heller_: I don't want to have to write code like jbjnr showed (rp.add_resource(rp.get_numa_domains().front().cores_.front().pus_, "mpi");)
<jbjnr> get_numa_domains()[0].cores_[0].pus[0]
<jbjnr> is the same, I just wrote it long hand cos it was version 1
<hkaiser> jbjnr: sure, fine - no objections, just give me the additional string api on top of that
<jbjnr> actually that one is domains[0].cores[0]
<jbjnr> all pu's on core zero
<hkaiser> heller_, jbjnr: you're thinking only of generating the hierarchy based on the existing hardware topology - I'd like to have the same hierarchy generated from a string representing part of the hardware topology
<jbjnr> hkaiser: please submit an issue requesting the string based api, once the initial PR is accepted. In a few weeks time :)
<hkaiser> jbjnr: I might not accept the PR without it ;)
<jbjnr> fork:HPX = true
<hkaiser> great - I'm finally seeing my retirement coming closer
<jbjnr> <sigh>
<heller_> hkaiser: this is what I am having problems with, it's hard to process a string, it's simpler to transform a tree of objects
<heller_> broad claim ;)
<hkaiser> heller_: the string processing is already in place
<hkaiser> heller_: and nobody will force you to use the string, use the hierarchy directly if you want
<heller_> yes
<jbjnr> and once you use the hierarchy, you'll never want the string api anyway. It's lame
<hkaiser> fine, call me old-fashioned
<hkaiser> and I know I'm a lame duck, so this is just fitting
<jbjnr> stop it
gentryx has quit [*.net *.split]
quaz0r has quit [*.net *.split]
<hkaiser> ;)
<jbjnr> in a few weeks time you might be a Gordon Bell finalist!
<heller_> what I am saying: with a string based approach, you will more likely end up in a situation where you either have an application specifically designed for a specific platform, or the user needs to know their machine, which pools the application uses, and the syntax.
<heller_> I am well aware of the fact that the same will happen with a type hierarchy based approach ;)
<hkaiser> heller_: not more than when relying on the type hierarchy
<hkaiser> you need indices here and there
<heller_> sure, it all depends on the final user interface, which is not entirely clear either way ;)
<hkaiser> that's what I said initially: the stream benchmark will force jbjnr and shoshijak to get the user-facing API under control
<heller_> so here the question is, what we want there, i guess
<jbjnr> ... when I fork hpx, should I call it HPX-6, that sounds better than HPX-3 and 5 doesn't it? maybe HP-eXtra or something more catchy ....gosh, choosing names is harder than you think ... :)
<heller_> just iterating over the available NUMA domains is what we have already now, I guess the challenge is to build up partitions of different sets and find "closest" sets etc.
<shoshijak> heller_ what do you mean by "closest sets"?
<heller_> shoshijak: give me the CPUs which are closest to my GPU/NIC/Memory
<heller_> that is, those that talk to that component with the lowest latency
<jbjnr> we are considering that now
<shoshijak> I was hoping these kinds of queries would be already built in to hwloc (or almost)
<heller_> yes, you can implement them using hwloc
gentryx has joined #ste||ar
<hkaiser> heller_: thread:0=gpu:0, i.e. one thread close to gpu device 0
<heller_> hkaiser: sure. or: for(auto gpu: rp.gpus()) { gpu.cores() /*<-- this gives you all PUs which are "closest"*/; gpu.cores().front() /* gpu:0 */;
<hkaiser> sure
<heller_> as said, I think those are equivalent
<hkaiser> right
<heller_> ok
<hkaiser> heller_: although I wouldn't intermingle the resource_partitioner with the topology
<jbjnr> ¯\_(⊙︿⊙)_/¯
<heller_> good question
<heller_> who tells you which resources you have available?
<jbjnr> rp.get_topology() ....
<hkaiser> rp should accept the required topology elements, and expose the current bindings, but not expose the underlying hardware topology
<heller_> is this all constraint by the initial command line (--hpx:threads/--hpx:cores/--hpx:bind) or will the application have the full view of the entire system?
<hkaiser> heller_: depends on how shoshijak has implemented it
<jbjnr> hardly matters since hk is going to reject our
<jbjnr> PR when we do it
<hkaiser> lol
<shoshijak> the resource partitioner has the full view of the system in the current implementation
<hkaiser> good
<hkaiser> well, not quite
<jbjnr> currently, the rp sees the whole system. if the user asks for threads=, cores=, bind= then the iterators exposed by the rp will only contain those elements
<hkaiser> a command line option of -t4 should limit the rp
<jbjnr> in the final version
<hkaiser> jbjnr: ok, sounds good
<heller_> shoshijak: I would actually argue that it only has the view of how the application has been initially configured ;)
<heller_> right
<hkaiser> just that I don't think the rp should be responsible for exposing the available hardware topology
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vHWAB
<github> hpx/gh-pages cf538c6 StellarBot: Updating docs
<heller_> same as if you request two localities per node via slurm (or similar)
<hkaiser> nod
<shoshijak> ... putting that on my list of things to modify ...
<jbjnr> hkaiser: you're right. the RP should set up pools etc, and the interface between it and the topology should be distinct.
<jbjnr> currently we expose the topology via the RP, but we should change that
<heller_> separation of concerns is a lie!
<hkaiser> cool
<hkaiser> I like lying
<heller_> hkaiser: I know
<hkaiser> ;)
<heller_> confess: your alter ego is Niall and you just hired an actor who gives all those talks etc
<hkaiser> ROFL
<hkaiser> that's close to an insult, actually ;)
<heller_> :P
<heller_> we should absolutely start using boost-lite. That shit is tight
<hkaiser> it even rhymes...
<hkaiser> who would have thought
<heller_> it's like a thing every C++ project can depend on, so that we can build decentralized C++ projects by relying on a single central piece
<heller_> that's the future! You are just too ignorant to see!
<heller_> ok, i should stop ...
<hkaiser> I am
<hkaiser> we should use that in compute instead (after implementing it ourselves)
<hkaiser> actually we have it implemented there already
quaz0r has joined #ste||ar
<heller_> hkaiser: good thinking
<heller_> hkaiser: jbjnr: btw, I updated the LF PP getting rid of my bogus changes...
<jbjnr> ok. will check later
<jbjnr> did you try the garbage memory tweak?
<heller_> not yet
<heller_> I am fully loaded with my EU project right now
<heller_> I am now at >90% openmp performance using runtime adaptivity and shit... yay
shoshijak has quit [Ping timeout: 240 seconds]
<heller_> hmm, should we attempt to replace our inspect tool with clang-tidy?
<heller_> we'd need to implement our own clang-tidy checks, I guess
bikineev has joined #ste||ar
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/vHlez
<github> hpx/master 6913ff6 Thomas Heller: Fixing performance regression
<heller_> fuck
<heller_> sorry
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/vHler
<github> hpx/master 0b4d2f7 Thomas Heller: Revert "Fixing performance regression"...
<heller_> :/
<heller_> I need a break
<hkaiser> heller_: increasing commit counts?
<heller_> hkaiser: primary goal since 5 years
<hkaiser> ...figures
bikineev has quit [Ping timeout: 246 seconds]
pree has joined #ste||ar
denis_blank has joined #ste||ar
pree has quit [Client Quit]
bikineev has joined #ste||ar
shoshijak has joined #ste||ar
K-ballo has joined #ste||ar
<hkaiser> denis_blank: see pm, pls
<hkaiser> jbjnr: btw, I still think that #2656 is too rigorous, but it's a gut feeling only
<heller_> I don't think the OS-Mutex will matter in the end, it doesn't show up when creating new threads either
<hkaiser> ok, it mattered when I initially implemented things
<hkaiser> but that's a while back, admittedly
<heller_> let's do some benchmarks
<hkaiser> I can see the current default to be too large, but benchmarking should give us a better idea
<heller_> I can see it being bad with very fine-grained tasks
<heller_> but there you make bad behavior just worse
<heller_> jbjnr: how do you build on tave currently?
ajaivgeorge_ has joined #ste||ar
shoshijak has quit [Ping timeout: 240 seconds]
shoshijak has joined #ste||ar
<jbjnr> hkaiser: I'll just add a cmake setting so that I can use a low number and you can have a high one
<jbjnr> #2656 that is
<jbjnr> heller_: tave - can't remember, do you need something in particular? I can look at my build tree ...
<heller_> jbjnr: just wondering... with the latest changes I get lots of warnings during cmake
<jbjnr> warnings about?
<heller_> static vs shared builds
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
ajaivgeorge has joined #ste||ar
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
ajaivgeorge_ has quit [Ping timeout: 240 seconds]
bikineev has quit [Ping timeout: 260 seconds]
<hkaiser> jbjnr: can't that number be controlled at runtime as well?
<hkaiser> jbjnr: but sure, let's have a cmake cfg setting defaulting to 100 or so
<jbjnr> either a cmake setting or -Ihpx.thread.cleanup.limit=100
<hkaiser> yah, but we have a setting for this, I believe
<hkaiser> jbjnr: hpx.thread_queue.max_delete_count
<jbjnr> "The value of this property defines the number of terminated HPX threads to discard during each invocation of the corresponding function."
<jbjnr> is that what it should say?
<hkaiser> uhh, let me have a look
<hkaiser> jbjnr: you're right, we don't have a runtime setting for this - sorry
<jbjnr> ok
hkaiser has quit [Quit: bye]
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
quaz0r has quit [Ping timeout: 260 seconds]
hkaiser has joined #ste||ar
ajaivgeorge_ has joined #ste||ar
ajaivgeorge has quit [Ping timeout: 246 seconds]
Matombo has quit [Remote host closed the connection]
eschnett has quit [Quit: eschnett]
shoshijak has quit [Ping timeout: 240 seconds]
akheir has joined #ste||ar
<github> [hpx] hkaiser created uninitialized_move (+1 new commit): https://git.io/vHlrt
<github> hpx/uninitialized_move d15ade8 Hartmut Kaiser: Adding uninitialized_move and uninitialized_move_n
<github> [hpx] hkaiser force-pushed uninitialized_move from d15ade8 to 9fc8837: https://git.io/vHlrs
<github> hpx/uninitialized_move 9fc8837 Hartmut Kaiser: Adding uninitialized_move and uninitialized_move_n
ajaivgeorge has joined #ste||ar
quaz0r has joined #ste||ar
ajaivgeorge_ has quit [Ping timeout: 245 seconds]
<hkaiser> denis_blank: I'm ready whenever you are to talk over skype
<denis_blank> Is there a boost::hana::unpack like utility function in the codebase already, which can pass the content of a tuple to a given callable object?
<hkaiser> denis_blank: invoke_fused
pree has joined #ste||ar
<denis_blank> hkaiser: Thanks
<K-ballo> denis_blank: aren't you the guy behind the invoke_fused documentation PR?
<K-ballo> who is Naios ?
<denis_blank> Yes that's me, actually I wanted to say that I should have known this because of my PR
aserio has joined #ste||ar
david_pfander has quit [Ping timeout: 240 seconds]
eschnett has joined #ste||ar
pree_ has joined #ste||ar
pree has quit [Ping timeout: 260 seconds]
<K-ballo> boost 1.64 doesn't work with /std:c++latest :'(
<K-ballo> it's hitting all those removed stdlib features STL pointed to years ago
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<hkaiser> denis_blank: now?
<hkaiser> K-ballo: yah you need to define that macro
EverYoun_ has joined #ste||ar
<hkaiser> _HAS_AUTO_PTR_ETC=1
EverYoun_ has quit [Remote host closed the connection]
EverYoung has quit [Ping timeout: 272 seconds]
EverYoung has joined #ste||ar
atrantan has joined #ste||ar
<atrantan> heller_, yt?
EverYoun_ has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
<K-ballo> hkaiser: heller_, wash: how unreasonable would it be for HPX to banish deferred futures?
<K-ballo> they only exist to allow that bounded pool in std::async, after all
<hkaiser> K-ballo: I have never used them
<K-ballo> of course not, we don't do the standard's async dance :)
<hkaiser> K-ballo: otoh, I could see for them to be useful for delaying calculating things until absolutely necessary
<K-ballo> yes, but that doesn't play nice with continuations
<hkaiser> like give me the next natural number - something you wouldn't like to do eagerly
<K-ballo> lazy evaluation get eagerly evaluated when continuations are involved
<hkaiser> right
<hkaiser> what did you have in mind?
<K-ballo> to outright banish them
<hkaiser> ok, let's hear what the others say
<hkaiser> I'm impartial
* K-ballo fears it's unreasonable
<hkaiser> why is it unreasonable?
<K-ballo> because it's part of the standard interface :/ even if a defective one
<hkaiser> do we actually have to guarantee lazy execution?
<K-ballo> if the policy is deferred, then yes, it's required to not execute unless blocking
<hkaiser> ok, what does the concurrency ts say about deferred?
<K-ballo> heh, whatever I made it say... it was completely ignored before then
<K-ballo> let me see...
<K-ballo> wow, not even that... there's no hits for "defer"
<hkaiser> so there is your backdoor ;)
<hkaiser> also, how does Vicente handle this in boost:;future
<K-ballo> last I checked it would terminate :P
<K-ballo> call a pure abstract virtual or something
<hkaiser> on purpose?
<hkaiser> uggh
<K-ballo> I doubt it was on purpose
<K-ballo> but that was years ago
ijimenez_ has joined #ste||ar
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 272 seconds]
<heller_> K-ballo: never used it either and I'm not aware of any non-contrived use case.
<heller_> In the case of natural numbers (or along that line), generators as defined in the coroutine TS are far better suited
<K-ballo> I'd be honestly surprised if there are any genuine use cases with HPX, deferred futures were introduced to allow a bounded pool on `std::async` so that if all the threads were busy a deferred future would be returned instead (though nobody implements it this way either, ms used to have the bounded pool they wanted but couldn't have deferred semantics)
aserio has quit [Ping timeout: 246 seconds]
<pree_> Hi all. Does moving a future to another thread and executing it there have any performance drawbacks?
pree_ has quit [Quit: AaBbCc]
ajaivgeorge has quit [Quit: ajaivgeorge]
<heller_> A future doesn't execute anything
<heller_> It's the receiving end of the thing to be produced
denis_blank has quit [Quit: denis_blank]
akheir has quit [Remote host closed the connection]
<atrantan> lazy evaluation can be useful for data reuse
<atrantan> and cache purpose
bikineev has joined #ste||ar
bikineev has quit [Remote host closed the connection]
aserio has joined #ste||ar
ijimenez_ has quit [Quit: Connection closed for inactivity]
bikineev has joined #ste||ar
<github> [hpx] K-ballo created compat-exception (+2 new commits): https://git.io/vH8gE
<github> hpx/compat-exception a7aa788 Agustin K-ballo Berge: Add compatibility layer for std::exception_ptr
<github> hpx/compat-exception 88ddf7a Agustin K-ballo Berge: Add inspect checks for deprecated boost::exception_ptr
eschnett has quit [Quit: eschnett]
<heller_> atrantan: that's also what continuations do
<heller_> K-ballo: is there any reason to not switch to std::exception_ptr directly?
<zbyerly> aserio, what's the name of the Workday purchasing thing?
<K-ballo> it's unsupported in certain combinations, like libc++ for msvc or mingw
<heller_> I'm not sure about those intermediate compat layers
<aserio> zbyerly: procurement I think
<K-ballo> why not? they are mostly just alises
<heller_> Ok, too bad, is that something we care about?
<zbyerly> aserio, i just now went to purchasing -> connect to supplier website
<heller_> Sure, it's disrupting users twice
<K-ballo> uh, how?
<zbyerly> now i have a bunch of options, CDW, Office Depot, etc.
<K-ballo> the point is to switch *without* disrupting any users
<aserio> zbyerly: yea
<heller_> First, their code fails because the boost version isn't defined anymore, the next time because we removed the compat namespace
<aserio> zbyerly: I thought you were going to talk to Frank first
<heller_> See the hpxcl discussion yesterday
<zbyerly> aserio, oh yeah that's right
<K-ballo> if their code fails because boost that's their doing?
<heller_> K-ballo: I might see ghosts...
<heller_> K-ballo: well it failed because the implied include vanished.
<K-ballo> if they rely on implied includes that's entirely their doing
<heller_> Yes
<K-ballo> we could never drop any, particularly with the huge amount HPX leaks
<heller_> It's still disruptive in a sense
<K-ballo> yes, it is, but not because they've failed to include the things they use
<heller_> Of course, as said, not sure if I'm exaggerating here...
<K-ballo> but because some interfaces change and they have to adjust for std::
<zao> It took me a good while to understand the subtleties of the standard wording "headers may include other headers".
<zao> subtext being "include what you use or die trying"
<K-ballo> one has to assume that headers include every other header, and at the same time include no other header, even those which are obvious requirements
<heller_> Pain in the ass...
<K-ballo> it's only a pain in the ass because you are allowed to be careless and sloppy
<K-ballo> once modules don't let you do that anymore, you wouldn't even notice
<heller_> Yup...
<zao> Once it's not a problem in C++ anymore, you're not using C++ anymore.
<K-ballo> anyways, I'd like to get back to the _twice_ part, users are not expected to use compat:: unless they target a platform where it is not supported
<heller_> Just wanted to ponder if it doesn't make sense to go std:: right away and drop the non-compliant platforms, temporarily
aserio has quit [Quit: aserio]
<K-ballo> maybe, possibly, but would it be any different for users?
<heller_> No
<heller_> As you said, the problem comes mostly from implicitly defined names
<heller_> In that scenario, the disruption would happen once only...
<heller_> Also, what would you advise to users?
<K-ballo> these are not things in user facing interfaces, so possibly nothing
<heller_> Yes, but let's assume a user uses boost::exception_ptr without including it
<K-ballo> to get their includes in order?
<heller_> That code fails now. What would you advise?
<K-ballo> to get their includes in order
<heller_> And add link libraries, potentially
<K-ballo> if they need them, definitely
hkaiser has quit [Quit: bye]
<heller_> Ok, good, let's do it that way then
<heller_> Nothing we can do anyway
<K-ballo> if we stop using #include <string> in a header and their code breaks, what would you advise them?
<heller_> Sure, same thing
<heller_> It's probably a good thing in the end
<K-ballo> nod, we want to cut as many leaked includes as we can *before* we get to play with modules
<K-ballo> though to be honest I imagine that playing with modules for HPX will fail horribly
<K-ballo> it's all one single huge module
<heller_> Yes :(
<heller_> Might be a good motivation to start cleaning up...
<heller_> One part at a time...
<heller_> I think our coroutines are one of the libraries at the lowest layer. I'd think, logically, all the error handling as well
<heller_> At least the infrastructure...
<K-ballo> nod, coroutines is a candidate for a 'core' module
<K-ballo> along with support, some traits, some utils, error handling
denis_blank has joined #ste||ar
EverYoun_ has joined #ste||ar
atrantan has quit [Quit: Quitte]
EverYoung has quit [Ping timeout: 255 seconds]
<jbjnr> is it possible to parse argc,argv twice using program options? When I try it, I lose all the hpx:xxx options on the second parse (but not my user defined ones)
<jbjnr> I want to get some options in int main (before hpx main), but still allow hpx to get the ones it needs as usual
<thundergroudon[m> what is the alternative to apt-get on rostam?
hkaiser has joined #ste||ar
Matombo has joined #ste||ar
<thundergroudon[m> I want to locally install some package for testing
<thundergroudon[m> I know I can git clone and make build
<thundergroudon[m> but is there a simpler way?
<hkaiser> what platform?
<zao> See if there's a module for it, assuming they have a module system. Otherwise, configure-make-pray.
<thundergroudon[m> Rostam
<zao> Alternatively, annoy the sysapes into installing it for you :D
<thundergroudon[m> haha
<thundergroudon[m> but just for testing
<hkaiser> thundergroudon[m: Rostam has everything except hpx itself installed, should be a no-brainer
<thundergroudon[m> yes
<thundergroudon[m> hkaiser: main ones are installed which I can call using 'module'
<thundergroudon[m> but if I want to locally install something to test
<hkaiser> module load cmake<something> boost<something> gcc<something-matching-with-boost>
<thundergroudon[m> like libdev-png
<zao> I doubt they have spack or EasyBuild.
<hkaiser> git clone hpx; mkdir build; cd build; cmake ../hpx
<hkaiser> done
<zao> So either hand-build, or grab all the packages and untar/uncpio/unblargh.
<zao> hkaiser: Rumor has it that there are other things in this world one might build apart from HPX :D
<thundergroudon[m> hkaiser: Yes! That part was straightforward, haha!
<hkaiser> zao: unheard of!
<thundergroudon[m> zao: lol! ;)
<zao> Of course, any sane person spends all CPU cycles in the world on building HPX.
<hkaiser> indeed!
<zao> When we've gotten ahead of our backlog of software, I want to try building HPX with EasyBuild.
<zao> Lots of weird dependencies and build flags.
<zao> Currently fighting Keras and StarPU.
<zao> (we as in my site)
<jbjnr> hkaiser: (asked a few minutes ago before you joined irc) is it possible to parse argc,argv twice using program options. When I try it, I lose all the hpx:xxx options on the second parse (but not my user defined ones)
<jbjnr> I want to get some options in int main (before hpx main), but still allow hpx to get the ones it needs as usual
<zao> jbjnr: In the context of an end-user program, of being a parcel port, or some other part of HPX?
<hkaiser> program options does not consume argc,argv, does it?
<jbjnr> ?
<jbjnr> zao: didn't understand your question
<hkaiser> just use argc,argv twice, once for parsing and once to hand it to hpx::init
<jbjnr> hkaiser: no, but it seems like the program options has some state that does not get cleared maybe
<hkaiser> jbjnr: show me the code
<jbjnr> hold one
<jbjnr> on^
<hkaiser> jbjnr: and what happens there?
<jbjnr> hpx::init now does not recognize hpx::help hpx:threads etc etc
<jbjnr> seems like program options somehow stores state internally or something
<hkaiser> ok, sec
<hkaiser> no it has no internal state
<hkaiser> jbjnr: try to write it as: store(command_line_parser(argc, argv).options(desc_cmdline2).allow_unregistered().run(), vm2)
<hkaiser> but it should reallz work as zou have it Ö-
<hkaiser> but it should really work as you have it :/
<hkaiser> darn keyboard layout options
<zao> Silly kezboards./
<hkaiser> zah sillz
<zao> (I played through all of Gothic 1 as "Yao")
<hkaiser> heh
<hkaiser> that's real dedication!
<zao> Didn't notice until someone in the game addressed me by name, and then it was too cumbersome to restart :)
<jbjnr> hkaiser: that fixes it. Thanks very much - how the flipping hell could you possibly have known that fix
<jbjnr> = what does the allow_unregistered etc do?
<jbjnr> aha I see
<jbjnr> those options haven't been added yet
<jbjnr> cancel my understanding comment - it's nonsense
<hkaiser> it allows for the first parse to ignore the hpx options
<jbjnr> yes. ok. got it.
<jbjnr> thank you very much.
<hkaiser> most welcome
<jbjnr> lovely. now I can just reuse the same desc var and not make a copy etc etc
eschnett has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
hkaiser has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
<github> [hpx] K-ballo force-pushed compat-exception from 88ddf7a to d778392: https://git.io/vH8FM
<github> hpx/compat-exception 65b0012 Agustin K-ballo Berge: Add compatibility layer for std::exception_ptr
<github> hpx/compat-exception d778392 Agustin K-ballo Berge: Add inspect checks for deprecated boost::exception_ptr
eschnett has joined #ste||ar
hkaiser has joined #ste||ar
eschnett has quit [Quit: eschnett]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 260 seconds]
EverYoun_ has quit [Ping timeout: 245 seconds]
<github> [hpx] hkaiser closed pull request #2611: Add documentation to invoke_fused and friends NFC. (master...fused) https://git.io/v9VSB
Matombo has quit [Ping timeout: 245 seconds]
eschnett has joined #ste||ar
bikineev has quit [Remote host closed the connection]
denis_blank has quit [Quit: denis_blank]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
eschnett has quit [Quit: eschnett]