hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
jaafar has quit [Ping timeout: 250 seconds]
hkaiser has quit [Quit: bye]
bibek has quit [Remote host closed the connection]
bibek has joined #ste||ar
jaafar has joined #ste||ar
jaafar has quit [Quit: Konversation terminated!]
david_pfander has joined #ste||ar
<zao> If you're bored at work, live presentations from the EasyBuild meeting I'm at - https://www.youtube.com/channel/UCqPyXwACj3sjtOho7m4haVA
<zao> jbjnr_: Do you know any of the CSCS people that are here in Belgium, or is your site too large? :)
<jbjnr_> zao: I have no idea if anyone is there. We use easybuild at CSCS, though I don't. The one time I tried it, it seemed like a lot of work just to duplicate the scripts I already had, in a way that made them more painful to use.
<zao> Guilherme Peretti-Pezzi, Luca Marsella, Victor Holanda Rusu.
<zao> jbjnr_: It has a bit of a barrier to entry, heh.
<zao> There's a Spack lad here too :)
<jbjnr_> I'm told spack is much better
<zao> I should try it some day, see if my build machine explodes.
Yorlik has joined #ste||ar
<Yorlik> Hello!
<Yorlik> Would HPX be suitable for developing an MMO-type game with a distributed server architecture by a group of advanced hobbyists? Because that's what we're trying to do (some of us are professionals or CS students) and we are currently researching computing models, especially task/fiber-based ones, for the purpose.
<Yorlik> If I understood correctly, HPX is doing exactly that, and in a distributed way.
<zao> Which parts of the game are you looking to use HPX for?
<zao> Assorted backend services, simulation server, game client?
<zao> HPX may have a bit of a bias toward cluster-type programs that set up their localities up-front. I'm uncertain how well it copes with systems that dynamically grow/shrink over time.
<Yorlik> Simulation server
<Yorlik> We want to use a Lua scripting engine for the game logic on the server
<Yorlik> The client will be a thin shared library as plugin for the Unreal Engine
<zao> I know that my buddy JoeH over at Bioware/EA open-sourced their Orbit software they use to run a lot of the backend services for their games. You may want to see if that fits some role too - https://github.com/orbit/orbit
<zao> It's icky Java, of course :)
<Yorlik> So - the core client-server is all homebrewed
<Yorlik> One of us is a java guy actually
<zao> For a game server, HPX could be a contender.
<Yorlik> We looked into SpatialOS, which is awesome, but expensive in the long run for a hobbyist project
<Yorlik> They seemed to have implemented a lot of what we designed for use before
<Yorlik> We ultimately decided to write the server on our own
<Yorlik> Even down to the networking based on UDP
<Yorlik> I initially followed ideas of a task-based system using coroutines/fibers which would get swapped out on IO, and in between even.
<Yorlik> I wrote a tiny Lua based OS actually which worked nicely
<Yorlik> So what we want to do is to taskify the entire server
<Yorlik> Also the Lua programming model will follow a strictly task and event based paradigm.
<Yorlik> tasks in Lua will be just event handlers
<Yorlik> No dynamic scripts attached to objects or anything - just a definition of event handlers per gameobject
<Yorlik> But for the heavy lifting, like pathfinding, AI and physics, we also want to use a task-based approach with ECSs at the lower level
<Yorlik> We believe a multithreaded ECS can be done when following certain rules and using buffering
<Yorlik> And yesterday I somehow stumbled over hpx :)
<Yorlik> Extending the programming model used on the C++ side to Lua scripting should be possible. (The Lua engine would just be a special case of it)
<Yorlik> Our world map will be split into a virtual grid of tiles we want to move dynamically between nodes/machines.
<Yorlik> Ultimately we might dynamically add nodes from the cloud depending on load - but all that is ofc pretty far away - we are doing basic research at the moment.
<Yorlik> How production-ready is HPX overall?
<Yorlik> (I know it's a pretty generic question - anyways)
<zao> The primary consumers of HPX tend to be HPC simulations.
<zao> Seems to mostly work for those, interface is reasonably stable.
<Yorlik> How does HPX handle I/O? Are there special mechanics to swap out fibers while they wait on an IO request?
<Yorlik> And have specialized IO threads?
<zao> I don't know how well regular I/O like TCP or file mixes into an HPX application; most communication inside an HPX application tends to be via the parcelports and components, built on top of Asio's io_service IIRC.
<zao> I guess that you'd arrange for the I/O to end up readying a future which you'd chain onto.
* zao pokes K-ballo, heller, or someone else that actually knows how HPX works :)
<zao> (I mostly build it)
<heller_> there we go
<Yorlik> :)
<Yorlik> Hello!
<heller_> hi
<heller_> so ... one after another
<heller_> dynamically adding/removing nodes is certainly a thing
<heller_> we support it with our parcelport design
<Yorlik> Just a disclaimer: We're nuts. If you read up you'll see ;)
<heller_> welcome to the club
<Yorlik> OTOH we know we're nuts - that might turn into an avantage in the end.
<heller_> regarding I/O: yes
<heller_> we have dedicated I/O threads which blend nicely into the whole future stuff
<Yorlik> IO and its fallout is one of my horrors.
<heller_> those are based on ASIO, so you can do all kinds of nifty asynchronous stuff there
<Yorlik> I was pretty amazed by the future concept mixed with fibers.
<heller_> give it a go
<Yorlik> Does HPX do all the thread management? Is it using thread pinning?
<heller_> yes
<Yorlik> Sweet.
<heller_> it is pretty configurable in that regard
<Yorlik> I think I'll have to build and try it. Everything I've read/heard so far sounds pretty amazing.
<zao> heller_: I know I keep asking this, but I keep forgetting the answer. Can I have a locality that is "the main thread" where I can run things that need such affinity?
<heller_> I don't get the question ;)
<zao> (cf. GUI frameworks and graphics APIs that need the "main thread" which already has a message pump in which I could poll for HPX things)
<Yorlik> Do you know how fast your coro switches are in general? Does hpx manage a fiber pool to run tasks?
<heller_> zao: ah yes, you can do that
<zao> I'll defer it until you're done with our buddy here, I'm in a meeting :)
<Yorlik> Thanks so far zao :)
<heller_> Yorlik: we are managing a pool of fibers, yes. At the moment I think we are at a granularity of a few microseconds
<heller_> so if your task runs for a millisecond, you are usually safe from overheads ;)
<Yorlik> I was playing with a very lightweight coro library in C which gave me roundtrips of 14 ns
<heller_> we identified a lot of bottlenecks, so there's lots of room for improvement
<Yorlik> I was afraid of using more heavyweight solutions, but HPX looks very promising.
<heller_> ok, we are talking about different things ;)
<Yorlik> My guess is, the overall gain might be worth it
<heller_> 14 ns roundtrip for just switching the contexts is what we have as well
<Yorlik> Essentially the alternative is to write our own system - but I am really intrigued by the distributed nature of hpc
<zao> "HPX" :)
<Yorlik> Woops
<zao> (HPC is High Performance Computing)
<heller_> however, that's really just from one context to another and doesn't include anything like multi core scheduling etc.
<Yorlik> I know - too much coffee ;)
nikunj has quit [Ping timeout: 268 seconds]
<heller_> so yes: 14 ns is as fast as you can get when just looking at switching contexts. But as always, there's more to it ;)
<Yorlik> HPX might just be the thing we're looking for
<Yorlik> Yes - sure. Interestingly that library went up to 50 ms on Windows
<heller_> what's your target platform for the server?
<Yorlik> Linux
<Yorlik> But we cross-develop from the start
<heller_> sure, no problem there, just asking
<Yorlik> Because there will be shared code between client and server
<heller_> of course
<Yorlik> Essentially we want the client to simulate too, but the server crosschecks and stays authoritative
<Yorlik> Client side sim is only for speed and smoothness
<Yorlik> In case of cheats a client would rubberband back
<Yorlik> In the core the client is just like a dumb terminal with some grease to make it look faster
<heller_> sure
<Yorlik> How much sense would HPX make for the client side of things ?
<heller_> well, in any case: HPX is there to use, I suggest you give it a whirl instead of developing your own system. We are always open to contributors, if the need arises to fix some of our performance problems ;)
<Yorlik> :D
<Yorlik> I'll do a compile - building Boost libraries right now
<Yorlik> I didn't find out yet where in your CMakeLists to set up BOOST_ROOT
<heller_> well, it makes sense if you want to directly talk to the server with the HPX facilities, with that being said, you probably need some layer in between to make the connection secure
<heller_> you don't
<heller_> that's passed to cmake
<Yorlik> I'm happy to see encouragement to use HPX for the project we have in mind.
<heller_> either set it in the GUI or pass it as an argument
<Yorlik> OK
<Yorlik> I'm using Visual Studio CMake projects on Windows and SublimeText on Linux - I'll put it in the configs
<Yorlik> Thanks a lot !
<Yorlik> I think i should do some test builds and playing around with it now.
<heller_> Yorlik: what's the name of your game? Do you have any resources so far?
<Yorlik> Not really
<Yorlik> Our group was working the last four years on a modding platform that's going south
<Yorlik> So we decided to abandon it and do an entire game from scratch
<Yorlik> We need to rework the website to the new situation before i can really give it out
<Yorlik> At the moment it's full of the old junk
<Yorlik> I think within the next few months we'll have it reworked. After all, we are so much at the beginning that the basic research and design are more important for us than outbound communications.
<Yorlik> Sry - I see my typing is horrible...
<Yorlik> heller_: I'll be offline for a while now - but I'll come back here for feedback and probably questions too. I think I have to compile it and write some hello world now :)
* Yorlik waves and fades
<Yorlik> And thanks a lot ! :)
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<zao> jbjnr_: Don't tell anyone, but I just used Spack to install HPX 1.0.0 on my toy box :)
<heller_> good good
<mdiers_> fyi: we use HPX in a singularity container
<zao> Correction - I tried using Spack to install HPX... it failed :D
<zao> Complains about missing tcmalloc and -DHPX_WITH_MALLOC=tcmalloc, while the command line explicitly specifies -DHPX_MALLOC=system
<zao> So the build is broken there, who do I blame? :)
<zao> (they should be saying -DHPX_WITH_MALLOC=system)
<hkaiser> zao: Christoph Junghans, I believe
<hkaiser> that needs an update anyways
<zao> Heh, broken in more ways, explodes on some Boost.Exception stuff.
<zao> Needs older Boost?
<K-ballo> needs older boost or newer hpx, yes
<K-ballo> Peter was eager to implement an LWG issue of his that was not supposed to break any code
<Yorlik> How does HPX decide if a task (along with its data) should go over the network to another node or not? Is there some way of planning based on parameters?
<jbjnr_> Yorlik: it goes over the network if you try to execute it on a different locality
<Yorlik> So the application controls where it runs?
<jbjnr_> hpx::async(locality, task, stuff)
<Yorlik> Nice.
<jbjnr_> yes, but ...
<jbjnr_> if you have some data, you can give it a handle and then move it from one node to another
<Yorlik> Means I could introduce planning depending on anticipated execution time / data transfer time and other metrics?
<jbjnr_> hpx::async(locality(where is my data), task, args)
<Yorlik> I wonder if AI planners could do that together with some A* graph search
<jbjnr_> so then the runtime will find the locality and do the right thing
jaafar has joined #ste||ar
<jbjnr_> yes, if you had a cost function for some task, you could use that to decide where to place it
<Yorlik> Nice!
<jbjnr_> you'd need to build a scheduler on top of the internal schedulers (in a sense)
<Yorlik> Surely not the first task I'd tackle ;)
<jbjnr_> correct. I wrote a test a long time ago that launched work on nodes and you could add nodes to the job interactively and have them receive work dynamically.
* Yorlik is getting wild fantasies about an elastic cluster using dynamic VM allocation in times of high load.
<jbjnr_> we need people to try that kind of thing because the library has not been used in anger for that kind of job and fixing the bugs found would be very useful
<Yorlik> I have a feeling it's worth trying. HPX seems to solve so many problems for us - at the moment I think it's probably a strong contender for what we want to do.
aserio has joined #ste||ar
<hkaiser> nice
<Yorlik> The good thing is one of us is super interested in parallel and distributed computing and our application demands solving many problems coming with that..
<hkaiser> then you're at the right place ;-)
<Yorlik> And I'm not that keen on writing just another scheduler and low level system - I want to work on the application mainly.
<Yorlik> Just saw your talk on futures for dependency management at CppCon - that got me hooked :)
<hkaiser> :D
<Yorlik> The future chaining thingy is really elegant.
<Yorlik> No new programming paradigms or anything - just "normal" modern c++
<hkaiser> Futurize All The Things!
<Yorlik> With some flavor.
<Yorlik> Yup
<Yorlik> Many small steps to climb a mountain.
<Yorlik> Also the core utilization graph made it really clear
<Yorlik> What I'm wondering is, how you could optimize for cache locality within this paradigm
<Yorlik> probably just put everything in arrays and use parallel loops
<hkaiser> Yorlik: cache locality is almost orthogonal, make your tasks cache friendly
<Yorlik> Optimizing Entity Component Systems should work nicely with HPX.
<hkaiser> there is the issue of NUMA awareness however, which we have not fully under control yet
<Yorlik> I played with hwloc before I stumbled over hpx
<Yorlik> But we'll not really have big machines any time soon - more a commodity cluster of cheap machines.
<hkaiser> sure
<Yorlik> My next step is to get it compiled and have some helloworld run.
<hkaiser> keep us in the loop
<Yorlik> I will.
<hkaiser> simbergm: yt?
<simbergm> hkaiser: here
<Yorlik> We have been researching computing models and solutions for months now and this is the first time I have a really strong feeling I want to work with a third party product.
<hkaiser> simbergm: Patrick ran into a problem building HPX 1.2 with Boost 1.69 (as expected) while trying to get things ready for Fedora 30
<Yorlik> Seeing there's a ton of exception handling too makes it easier.
<hkaiser> The deadline for F30 is mid April, I believe, should we go with a patch or will we have HPX1.3 released by then?
<Yorlik> Which boost version should I use to compile 1.2?
<hkaiser> Yorlik: error reporting is 100% exception based
* Yorlik just compiled 1.69
<hkaiser> anything before 1.69
<Yorlik> OK
<hkaiser> Yorlik: or apply a small patch, sec
<Yorlik> Thanks for the heads up. I'll look into it.
<simbergm> hkaiser: hmm, good question, I was planning to do it sometime around that, but we can make sure it's done before (famous last words)
<hkaiser> simbergm: probably not a biggy as the patch is trivial
<simbergm> we can try, and if things are looking bad we'll just go with 1.2 with the patch
<hkaiser> k
<K-ballo> how much work is a patch release?
<hkaiser> good question, we have never done this
<K-ballo> if it's just a matter of a couple hours, I'd say let's issue the patch right now
<K-ballo> people will continue to try to use boost 1.69
<hkaiser> nod
<simbergm> not a lot of work I'd say, sounds like a good idea
<Yorlik> You're doing an HPX 1.2.1?
<hkaiser> Yorlik: we might
<aserio> simbergm, heller_, jbjnr_: Will you be joining us on the HPX call?
<simbergm> aserio: just connecting
<heller_> aserio: give me a second
<Yorlik> I'm just wondering if I should wait for it instead of compiling Boost 1.68 or applying the diff.
<Yorlik> The diff is small though.
<hkaiser> Yorlik: 1.2.1 would be 1.2 plus that patch only anyway
<simbergm> Yorlik: compiling 1.68 or applying the diff (or using master) will be faster, might be a few days until we make a patch release
<Yorlik> Alright - thanks!
<Yorlik> I have an automated cmake script for boost anyways - I think I'll just let it run.
<Yorlik> I'm just getting a weird feeling HPX might just be a breakthrough for C++. Concurrency and parallelism becoming almost as easy as Lua scripting. ;)
<Yorlik> I'm not a professional programmer after all
<Yorlik> This all looks really approachable to me.
<Yorlik> Hiding away all the dirty tricks and details.
<Yorlik> When we were discussing internally which language to use for the server, Rust was also in the discussion. I'm happy I was pushing for modern C++ instead. As nice as Rust is, to me it looked too much like handholding at a convenience cost I didn't want to pay, apart from it still being sort of exotic (like, how many people really use it?).
<Yorlik> With HPX it feels like a lot of help for dealing with concurrency and parallelism is coming back.
<Yorlik> I hope it turns out as nice as it looks to me at the moment.
<hkaiser> it is ;-)
<Yorlik> :)
<Yorlik> It's a bit like everyone is scared of parallelism and concurrency, or horribly oblivious of the issues, and finally someone put it all together nicely and encapsulated the guidelines you have to follow in a nice standard-conformant API. Looks a bit like a dream, really. I never believed concurrency has to be the boogeyman of programming, and I find it really relieving to see what's been done here.
<Yorlik> So - thanks a lot as a first impression.
<Yorlik> :)
<hkaiser> thanks
akheir has joined #ste||ar
hkaiser has quit [Quit: bye]
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 240 seconds]
aserio1 is now known as aserio
<heller_> simbergm: jbjnr_: https://github.com/tomtom-international/cpp-dependencies <-- this is the tool I had in mind...
<heller_> fails to parse our cmakelists ;)
<K-ballo> dascandy's, I tried that one on hpx a year or two ago
<K-ballo> I remember it being too coarse grained for us at the time, "components" were represented as subdirectories or something
aserio has quit [Ping timeout: 268 seconds]
<diehlpk_work> K-ballo, A patch release is not much work for me on fedora
david_pfander has quit [Ping timeout: 240 seconds]
<diehlpk_work> I will do a patch release next week, because Boost will be updated soon and HPX will not compile for the coming mass rebuild
<diehlpk_work> The question is more what is shipped with Fedora 30: HPX 1.2 or HPX 1.3. I do not like to do a major update within one Fedora release
jaafar has quit [Ping timeout: 240 seconds]
<zao> (1.68 worked better, as expected)
aserio has joined #ste||ar
nikunj has joined #ste||ar
aserio has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
adityaRakhecha has quit [Ping timeout: 256 seconds]
<simbergm> diehlpk_work: I was planning on doing the patch release, but I'm happy if you want to do it
<simbergm> what's wrong with 1.2.1 for Fedora 30?
<simbergm> and what would be wrong with 1.3?
<diehlpk_work> simbergm, Misunderstanding! I wanted to apply this patch to my fedora package.
<diehlpk_work> But if you could do a hpx release it is even better
<diehlpk_work> simbergm, Fedora uses boost 1.69 for Fedora 30
<diehlpk_work> And 1.2.1 does not compile with boost 1.69
<simbergm> diehlpk_work: ok, perfect, I can do that (probably next week)
<simbergm> 1.2.0 right?
<simbergm> I saw your email/issue
<diehlpk_work> K-ballo changed master to compile with 1.69
<simbergm> yes, I'll cherrypick that commit to a 1.2.1 release
<diehlpk_work> Ok, but can I have these patches included?
<diehlpk_work> You're doing an HPX 1.2.1?
<diehlpk_work> Forgot about the first message, copy and paste failed
<diehlpk_work> Or would you just do K-ballo's patch for 1.69
<diehlpk_work> and I will add these two patches again?
<simbergm> diehlpk_work: definitely, I can add those and any other bugfix patches
<K-ballo> are there two different things called 1.2.1?
<simbergm> I'll go through the commits since 1.2.0 and check if there are others
<simbergm> K-ballo: no...? why?
<K-ballo> I'm confused.. but don't mind me, carry on
<simbergm> ok :)
aserio has joined #ste||ar
akheir_ has joined #ste||ar
<diehlpk_work> K-ballo, Some last-minute changes from jbjnr_ broke compiling HPX 1.2.0 on all Fedora arches
<diehlpk_work> So we had to do a fix and named this unofficial version 1.2.1
<K-ballo> aha, now I am no longer confused
<simbergm> oh, I see... is that called 1.2.1 on fedora?
<hkaiser> diehlpk_work: are those changes on master now?
<hkaiser> (we should have done a patch release then already)
<hkaiser> simbergm: looks like you will need to release 1.2.2
<diehlpk_work> hkaiser, I think this one is on master https://patch-diff.githubusercontent.com/raw/STEllAR-GROUP/hpx/pull/3551.patch
<simbergm> diehlpk_work: is 1.2.0-5 the latest version on fedora? https://apps.fedoraproject.org/packages/hpx
<diehlpk_work> yes
<hkaiser> also, for future reference: we should name such patched releases 1.2.0-1 (or similar)
<diehlpk_work> it is in master too
<simbergm> diehlpk_work: so a 1.2.1 is ok for fedora...? no need for 1.2.2
<diehlpk_work> Essentially, on Fedora we use the HPX 1.2 base and applied these two patches
<hkaiser> diehlpk_work: ok, pls collect all compatibility fixes that should go on the patch release
<diehlpk_work> I need these two patches
<hkaiser> diehlpk_work: why is it 1.2.0-5 then? where are the other 3 changesets?
<diehlpk_work> I download this package and applied the two patches for fedora
<hkaiser> or were those build system related?
<hkaiser> rpm related, that is
<diehlpk_work> Fedora related
<hkaiser> k
<diehlpk_work> Each time they build our package they update this number
<diehlpk_work> Same for opensuse
<hkaiser> so what's the official version number of hpx on fedora29, then? 1.2.0-5 or 1.2.1?
<diehlpk_work> First one
<hkaiser> ok
<hkaiser> so we're safe to release a 1.2.1
<diehlpk_work> Yes, would be 1.2.1-1 in fedora
<hkaiser> perfect
<diehlpk_work> Whenever you have a release candidate, I will test it on fedora build system
<diehlpk_work> Once we have a working solution, I will update the package
<diehlpk_work> So fedora 30 comes with hpx 1.2.1
<diehlpk_work> fedora 31 with hpx 1.3
<simbergm> how often does fedora have a major release? every 6 or 12 months?
jaafar has joined #ste||ar
bibek has quit [Quit: Konversation terminated!]
<diehlpk_work> simbergm, around 6 months
aserio1 has joined #ste||ar
akheir_ has quit [Remote host closed the connection]
bibek has joined #ste||ar
aserio has quit [Ping timeout: 264 seconds]
aserio1 is now known as aserio
aserio has quit [Ping timeout: 240 seconds]
hkaiser has quit [Quit: bye]
mbremer has joined #ste||ar
quaz0r has quit [Ping timeout: 240 seconds]
quaz0r has joined #ste||ar
aserio has joined #ste||ar
parsa[[w]] has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 240 seconds]
aserio1 is now known as aserio
bibek has quit [Ping timeout: 240 seconds]
parsa[w] has quit [Ping timeout: 240 seconds]
diehlpk_work has quit [Ping timeout: 250 seconds]
bibek has joined #ste||ar
bibek has quit [Quit: Konversation terminated!]
bibek has joined #ste||ar
aserio has quit [Quit: aserio]
hkaiser has joined #ste||ar
<hkaiser> K-ballo: yt?
<K-ballo> partly
<hkaiser> problem with handling future<R&>
<K-ballo> heh, unwrapped was officially declared Not My Problem :P
<hkaiser> no, this has no relation to unwrap
<hkaiser> see my comment
<hkaiser> probably just a missing specialization of get_remote_result or something, not sure
<K-ballo> dataflow? same camp
<K-ballo> :P
<hkaiser> lol
<hkaiser> ok
<K-ballo> let me try to compile that
<hkaiser> I think dataflow just triggers the problem, happens in set_value
<K-ballo> cannot convert A to A*?
Yorlik has quit [Read error: Connection reset by peer]
<hkaiser> yah
<hkaiser> the generated future is future<R&> which internally stores a R*
<K-ballo> is dataflow remote?
Yorlik has joined #ste||ar
<hkaiser> not in this case
<hkaiser> we always use get_remote_result for setting the value but this is not the issue as in this case it shouldn't do anything
<K-ballo> that remote in there is confusing then
<hkaiser> right, doesn't belong there, should go into the remote promise
<hkaiser> I can fix that - but it's unrelated
<K-ballo> glancing at the trace, it would seem the shared state is being fulfilled with a pointer instead of a reference
<K-ballo> or.. other way around?
<K-ballo> I'm not sure I understand get_remote_result
<K-ballo> but yeah, some & or * might be missing in there
parsa[[[w]]] has joined #ste||ar
<hkaiser> set_value receives a reference, which is correct, but internally it needs to set the pointer to the reference's address
<K-ballo> that's what I thought, but unless I'm mistaken that would imply we have no test whatsoever for future<T&>
parsa[[w]] has quit [Ping timeout: 252 seconds]
<K-ballo> I wonder how the remote-ness part plays into that
parsa[[[w]]] has quit [Ping timeout: 252 seconds]
<K-ballo> furthermore, looking at the SO post, since async works that logic must already be present somewhere else