hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
K-ballo has quit [Quit: K-ballo]
nikunj has joined #ste||ar
hkaiser has quit [Ping timeout: 250 seconds]
nikunj has quit [Read error: Connection reset by peer]
nikunj97 has joined #ste||ar
nikunj97 has quit [Remote host closed the connection]
<heller> toying around with static reflection ;)
<heller> this is how distributed objects in C++ could look like in the future...
<heller> this is pretty cool ;)
mdiers_ has joined #ste||ar
mdiers_ has quit [Remote host closed the connection]
mdiers_ has joined #ste||ar
mdiers_ has quit [Client Quit]
simbergm has joined #ste||ar
rori has joined #ste||ar
heller has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
heller has joined #ste||ar
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<heller> hkaiser: the talk is as well
<hkaiser> nod
<heller> watching it right now...
simbergm1 has joined #ste||ar
<simbergm1> heller: hkaiser: is there a reason to keep invalid_thread_id around? I've made some changes for which I'd like to remove it, but I just want to check first that it's not critical in some way
<simbergm1> it seems like a default constructed thread_id does the same job
<heller> it does
<hkaiser> simbergm1: it might not be needed after heller's execution_context branch is merged, not sure
<heller> invalid_thread_id is just more expressive in some sense
<hkaiser> simbergm1: why would you like to remove it?
<heller> hkaiser: good point, any deep rationale why you renamed the namespace?
<simbergm1> hmm, ok, I'll give it a try
<hkaiser> heller: to separate this from the executors stuff (at least for now)
<simbergm1> thread_id is templated on the coroutines branch to separate thread_data out of that module
<hkaiser> ahh
<simbergm1> and not having to instantiate a global invalid_thread_id with the correct thread_data would make things simpler
<heller> thread_id_type and coroutines are closely related though
<hkaiser> simbergm: we could turn invalid_thread_id into a simple tag type
<simbergm1> other ideas welcome though
<simbergm1> thread_id_type stays in that module
<simbergm1> thread_data not
<hkaiser> nod
<simbergm1> so thread_id_type and coroutines will live together
<heller> ok
<heller> so invalid_thread_id is just a nullptr to thread_data
<hkaiser> heller: it was also conflicting on all ends with the other execution module
<heller> hkaiser: I think I removed these conflicts
<simbergm1> right, that's why I figured it wasn't critical
<heller> but sure, I didn't have the time to fully clean it up yet...
<hkaiser> simbergm1: what about struct invalid_thread_id {}; ?
<simbergm1> hkaiser: yes, but then what?
<hkaiser> well use it?
<simbergm1> how do you instantiate a thread_id with that?
<heller> you don't, you add a ctor and comparison functions
<hkaiser> define comparison operators/constructors for thread_id that can handle that
<simbergm1> ah, ok, I see
<simbergm1> wait, thread_id::thread_id(invalid_thread_id) does the same as thread_id::thread_id() i.e. sets the data pointer to nullptr?
<simbergm1> if I want to return an invalid_thread_id from a function returning thread_id?
<heller> yes
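A minimal sketch of the tag-type idea being discussed, assuming a simplified thread_id that only wraps a raw pointer (the real thread_id in HPX holds a reference-counted thread_data pointer); the empty tag carries no data, so returning it from a function declared to return thread_id works without pulling in thread_data:

    // illustrative only; names mirror the discussion, not the actual HPX headers
    struct invalid_thread_id {};

    class thread_id
    {
        void* data_ = nullptr;    // stands in for the thread_data pointer

    public:
        constexpr thread_id() noexcept = default;
        constexpr thread_id(invalid_thread_id) noexcept : thread_id() {}

        friend constexpr bool operator==(thread_id lhs, thread_id rhs) noexcept
        {
            return lhs.data_ == rhs.data_;
        }
        friend constexpr bool operator==(thread_id lhs, invalid_thread_id) noexcept
        {
            return lhs.data_ == nullptr;
        }
    };

    thread_id find_thread(/* ... */)
    {
        // ...
        return invalid_thread_id{};    // converts via the tag constructor
    }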
<heller> so the problem is that you don't want to pull in thread_data?
<simbergm1> ideally, yes
<simbergm1> it pulls in so much other stuff
<simbergm1> that would be nice to keep separate
<heller> i think the execution context stuff will simplify that significantly
<heller> i'll clean it up for good right now
<hkaiser> simbergm1: we want to introduce stackless threads again; this would require creating an abstract base class for thread_data anyway
<hkaiser> that base class could stay in the coroutines module without pulling in anything else, would that help?
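A rough sketch of the abstract-base-class idea, with hypothetical names and a deliberately tiny interface (the actual thread_data API is much larger):

    // could live in the coroutines module; derived classes live elsewhere
    class thread_data_base
    {
    public:
        virtual ~thread_data_base() = default;

        // only the operations the thread_id/coroutines layer needs
        virtual void resume() = 0;
        virtual bool is_stackless() const noexcept = 0;
    };

    // stackful (coroutine-based) and stackless implementations derive from it,
    // so thread_id can refer to a thread_data_base* without needing the full
    // thread_data definition
    class stackful_thread_data : public thread_data_base { /* ... */ };
    class stackless_thread_data : public thread_data_base { /* ... */ };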
<simbergm1> right, the execution context stuff is a bit higher level though?
<simbergm1> hkaiser: yeah, I think so
<hkaiser> it is
<simbergm1> hmm, well, it would help a bit, but it would still require e.g. thread_description for the thread_data base class
<simbergm1> not the end of the world but slightly annoying
<simbergm1> for stackless threads, what about setting the stacksize to 0 like it was done before?
<heller> for stackless coroutines, I would imagine them using their own execution context
<heller> which might not necessarily require deriving from thread_data
<heller> I would see thread_data as belonging directly to the coroutines module
<heller> and rather factor the dependencies out
<simbergm1> yeah, well, the dependencies have to go in any case, but I quite like keeping the coroutines implementations completely separate
<simbergm1> thread_data and friends already get into priorities, queues, etc.
<heller> well, once we have proper execution contexts, the queues and priorities would go away naturally, no?
<simbergm1> from the level of thread_data? maybe... still, isn't the execution context above all that? it exposes yield, suspend etc?
<simbergm1> I admit I'm not fully clear on where exactly execution_context fits in...
<heller> the execution context doesn't expose yield and suspend
<heller> an execution context is the context in which an execution agent is executed; for this, the context uses certain resources
<simbergm1> ok, execution_agent
<heller> the execution agent is the one that yields/suspends etc
<heller> so thread_data is the execution agent
<heller> running on an execution context
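A conceptual sketch of the separation described here; the names and signatures are illustrative, not the actual interfaces from heller's branch:

    // the resource an agent runs on (worker thread, thread pool, ...)
    class execution_context
    {
    public:
        virtual ~execution_context() = default;
        // resource management lives here
    };

    // one unit of execution; it is the thing that yields/suspends/resumes
    class execution_agent
    {
    public:
        virtual ~execution_agent() = default;

        virtual execution_context& context() const noexcept = 0;

        virtual void yield() = 0;      // cooperatively give the context back
        virtual void suspend() = 0;    // block until some other agent resumes us
        virtual void resume() = 0;     // make a suspended agent runnable again
    };

    // in this picture thread_data plays the role of the execution_agent and
    // the scheduler's worker threads provide the execution_context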
<simbergm1> ok
<hkaiser> heller: btw, Bryce showed me an (independent) implementation of your agent_ref he created for his cppcon talk
<hkaiser> he called it boost_blocker (or similar, don't remember)
<hkaiser> while nobody would really understand this (most people don't know what 'boost-blocking' is), this name perfectly describes what agent_ref does
<heller> I don't think it is just a boost blocker
<hkaiser> what else is it?
<heller> the intention really was to add a type erased thing for an execution agent
<heller> the "boost_blocker" property only comes from the ability to yield and resume it, no?
<hkaiser> right
<heller> IIRC, "boost blocking" comes from all the forward progress stuff, right?
<hkaiser> yes
<hkaiser> makes sure that something can go ahead while something else is being 'blocked'
<heller> yes
<heller> so an execution agent will always implement some form of forward progress guarantee
<heller> and it needs an API to formulate it
tianyi93 has quit [Ping timeout: 264 seconds]
<heller> so yes, agent_ref is capable of implementing boost blocking
<heller> at least that would be my understanding
<hkaiser> heller: I'm not suggesting to rename what you proposed, just an observation
<heller> yeah, just adding my thoughts to it
<heller> no one except Torvald understands what boost blocking really means anyway ;)
<hkaiser> right ;-)
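To make the "something can go ahead while something else is blocked" point concrete, here is a small illustrative sketch (hypothetical helper, not an HPX API): a wait that hands control back to the context instead of spinning, so other agents can run and eventually satisfy the condition:

    #include <atomic>
    #include <functional>

    // 'yield' is whatever gives control back to the execution context,
    // e.g. the yield() of the agent sketched above
    void wait_for(std::atomic<bool>& flag, std::function<void()> const& yield)
    {
        while (!flag.load(std::memory_order_acquire))
        {
            // instead of busy-waiting (and starving the context), let other
            // agents make progress; one of them will set the flag
            yield();
        }
    }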
<heller> hkaiser: co_await is C++20 now?
<hkaiser> yes!
<heller> awesome!
<heller> I played around with Andrew Sutton's reflection today ...
<hkaiser> that is cool as well
<hkaiser> nice!
<heller> game changer, really
<hkaiser> real wrappers!
<heller> to my understanding, everything except the injection part is targeted for C++23, right?
<hkaiser> well, I doubt it
<hkaiser> the reflection perhaps
<heller> right, that's everything except injection ;)
<hkaiser> k
<heller> and with that, we can have fully automatic serialization, at least
<heller> not the real wrappers yet
<hkaiser> if they really add reflection on lambdas to the mix, not sure if that's in
<hkaiser> and construction of lambda captures
<heller> lambdas are just anonymous structs, so why not
<hkaiser> well, sure
<heller> yeah, it will get hairy, for sure
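For context, this is roughly what has to be written by hand today: a Boost.Serialization-style member function that HPX's serialization archives pick up. Static reflection would allow generating this automatically from the list of data members (the type and its members below are made up for illustration):

    struct particle
    {
        double x, y, z;
        int id;

        // today: enumerate every member manually; with reflection this
        // member function could be synthesized from the class definition
        template <typename Archive>
        void serialize(Archive& ar, unsigned /* version */)
        {
            ar & x & y & z & id;
        }
    };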
<rori> would you have any reference link or paper on boost-blocking?
<hkaiser> rori: uhh, there is one from a couple of years back
aserio has joined #ste||ar
<hkaiser> look for Torvald Riegel's papers
<rori> ok thanks !
<hkaiser> rori: wg21.link/p0679
hkaiser has quit [Ping timeout: 264 seconds]
simbergm has quit [Read error: Connection reset by peer]
<rori> tks!
<diehlpk_work> They are looking for mini-tutorials; we might want to apply with an HPX tutorial using Python notebooks?
<heller> is this working in production?
hkaiser has joined #ste||ar
<diehlpk_work> heller, we will know in December, after my course.
<diehlpk_work> Some of my students use the notebooks to do the exercises
<hkaiser> heller: also, since you seem not to have time to work on serialization, can we go ahead now and apply any changes you might come up with later?
<diehlpk_work> If not, we could just do the C++ HPX tutorial
<heller> hkaiser: I still think it is bad to have a change in semantics for id_type serialization
<diehlpk_work> But I think doing this tutorial would help to advertise hpx within the applied mathematics community
<hkaiser> is there a change? not for the use case inside HPX, I believe
<heller> hkaiser: might not be a problem within HPX itself, true
<heller> gtg
<diehlpk_work> simbergm1, Rebecca will work on the stencil example as her next task
<diehlpk_work> I think since we point many people to this example, it would be nice to have the text improved there
<heller> hkaiser: I believe that the future handling is a necessary pre- and post-condition for this algorithm
<simbergm1> diehlpk_work: agreed, sounds good
<hkaiser> heller: what algorithm?
<heller> The gid splitting during serialization
<hkaiser> sure
<hkaiser> what's your point?
aserio1 has joined #ste||ar
<heller> That those are easily violated right now ;)
<hkaiser> heller: not inside HPX
<hkaiser> that's all I care about at this point
aserio has quit [Ping timeout: 252 seconds]
aserio1 is now known as aserio
<hkaiser> if somebody wants to use HPX serialization outside of HPX, they probably will not need credit splitting anyway
<heller> Sure
<heller> And since nothing is documented anyway...
<heller> The performance test is failing
<heller> When run in the test system
<heller> Command line handling issue...
aserio has quit [Ping timeout: 246 seconds]
<hkaiser> heller: no idea why it's failing, works locally
rori has quit [Quit: WeeChat 1.9.1]
simbergm has joined #ste||ar
aserio has joined #ste||ar
aserio has quit [Ping timeout: 250 seconds]
nikunj has joined #ste||ar
aserio has joined #ste||ar
aserio has quit [Ping timeout: 245 seconds]
maxwellr96 has quit [Ping timeout: 250 seconds]
aserio has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 245 seconds]
aserio1 has quit [Ping timeout: 264 seconds]
hkaiser has quit [Ping timeout: 245 seconds]
aserio has joined #ste||ar
nikunj has quit [Remote host closed the connection]
diehlpk_work has quit [Remote host closed the connection]
hkaiser has joined #ste||ar
heller has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
heller has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 245 seconds]
aserio1 is now known as aserio
aserio has quit [Quit: aserio]
Coldblackice has joined #ste||ar
Coldblackice has quit [Ping timeout: 240 seconds]