aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
patg has quit [Quit: This computer has gone to sleep]
bikineev has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
Matombo has quit [Remote host closed the connection]
EverYoung has quit [Ping timeout: 246 seconds]
hkaiser_ has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
vamatya has joined #ste||ar
ajaivgeorge has quit [Ping timeout: 240 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 258 seconds]
ajaivgeorge has joined #ste||ar
patg has joined #ste||ar
patg is now known as Guest67483
Guest67483 has quit [Client Quit]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 258 seconds]
vamatya has quit [Ping timeout: 260 seconds]
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/vQ3DB
<github> hpx/master d1898af Thomas Heller: Merge pull request #2712 from STEllAR-GROUP/fixing_wait_for_1751...
<github> [hpx] sithhell deleted fixing_wait_for_1751 at e51b330: https://git.io/vQ3Du
ajaivgeorge has quit [Read error: Connection reset by peer]
ajaivgeorge has joined #ste||ar
ajaivgeorge_ has joined #ste||ar
ajaivgeorge has quit [Ping timeout: 268 seconds]
ajaivgeorge has joined #ste||ar
ajaivgeorge is now known as ajaivgeorge__
ajaivgeorge_ has quit [Ping timeout: 268 seconds]
ajaivgeorge__ has quit [Ping timeout: 240 seconds]
Matombo has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
Matombo has quit [Remote host closed the connection]
bikineev has joined #ste||ar
david_pfander has joined #ste||ar
<heller> david_pfander: hey
<heller> did the dynamic build on tave work now?
EverYoung has joined #ste||ar
bikineev has quit [Remote host closed the connection]
david_pfander1 has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
Matombo has joined #ste||ar
david_pfander has quit [Ping timeout: 255 seconds]
david_pfander1 is now known as david_pfander
Matombo has quit [Ping timeout: 268 seconds]
<jbjnr_> I really really need this stack overflow detection stuff
<jbjnr_> ABresting: how's it going - when can I get my hands on it!
bikineev has joined #ste||ar
<jbjnr_> how can I get a stack overflow when I do not execute ANY tasks?
<jbjnr_> my code segfaults, but it stops doing that if I increase the stack size. I am troubled by this
<jbjnr_> I must have some horrific memory corruption somewhere
Matombo has joined #ste||ar
<ABresting> jbjnr_: Hi John, I am working through it. Trying to integrate a wrapper over libsigsegv to do the detection on the user side.
<ABresting> But there is more to it, libsigsegv isn't as effective as everyone thinks
<jbjnr_> no worries. Just be aware that you have an eager tester waiting to try it out when it is ready
<ABresting> about your question of getting a stack overflow without doing any tasks
<jbjnr_> if we could insert some assembler into every task to look at the stack pointer .....
<ABresting> so do you consider allocating memory a task?
<jbjnr_> no
<jbjnr_> I mean that my code does not have any async calls so it really shouldn't have a problem
<ABresting> it is doing the same thing indeed, but what happens is that when one thread enters another thread's space, it is treated as a segfault
<ABresting> async is just a way to do some task in some process address space, but a stack overflow also happens when you allocate memory at runtime
hkaiser has joined #ste||ar
<jbjnr_> ABresting: I don't follow you. If one thread tramples on another thread's memory because of bugs etc., then that's just bad programming - but if the stack is exceeded, that's something we need to detect specially. I didn't quite understand you
<jbjnr_> the symptoms of both might be the same of course, but detecting the stack overflow would be a life saver
<ABresting> maybe this can give you more insight,
<ABresting> also tell me if I am missing something from this link
<ABresting> and I know pthread isn't what hpx is using, but boost threads too are a wrapper over it and the overall principle remains the same
<jbjnr_> the thing I don't understand about that is that, if you overflow the stack and go into another piece of memory - and that piece of memory has been allocated by you according to the rules - then there will not be a segfault. I don't see how we can guarantee a segfault
<ABresting> Idk how I am gonna take care of this, have to discuss this with wash and the group
<jbjnr_> I'd want to inject some code into every task that checked the stack pointer - the compiler inserts the stack pointer increment for every function that's called - but I am not sure how we know how much that is for a given function. If we knew it, then we could properly compare the stack pointer to the allocated size ...
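A minimal sketch of the per-task check jbjnr_ is describing, assuming the scheduler can hand us each task's stack bounds (all names here are hypothetical, not HPX API):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical: the scheduler records where each task's stack starts
// (lowest address) and how big it is.
struct stack_info {
    std::uintptr_t low;   // lowest valid address of the task's stack
    std::size_t size;     // allocated stack size
};

// Returns true if less than `margin` bytes of stack remain. The address
// of a local variable approximates the current stack pointer; stacks
// grow downward, so the remaining headroom is (sp - low).
inline bool stack_nearly_exhausted(stack_info const& si,
                                   std::size_t margin = 4096) {
    char probe = 0;
    std::uintptr_t sp = reinterpret_cast<std::uintptr_t>(&probe);
    if (sp < si.low)
        return true;                 // already past the end
    if (sp >= si.low + si.size)
        return false;                // not running on this stack at all
    return sp - si.low < margin;     // headroom below the safety margin?
}
```

It can only observe the stack pointer where the check runs, which is exactly jbjnr_'s caveat: a single function with a large frame can jump straight past the margin in one step.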
<zao> jbjnr_: Doesn't the same mechanism that handles the guard page autoextend shenanigans also take care of enforcing the stack limit by failing to resolve the guard page or something?
<zao> That's how I _guess_ things work :P
<ABresting> wait let me think
parsa[w] has joined #ste||ar
pree has joined #ste||ar
<jbjnr_> zao: in principle yes. A guard page ought to be enough - but that would mean that every stack allocated by hpx would need to have quite a significant buffer space - I assume we can enable this already (there's a cmake option for it), so then the segfault should always appear - as long as you don't overflow by such a huge amount that you go into another page of good memory ....
<jbjnr_> bbiab
<zao> jbjnr_: Hi and welcome to the brand spanking new Stack Clash CVE :P
<ABresting> think of it this way: we have a process with 10kb of stack memory, and a thread which takes 3kb. Inside the thread we put a stack pointer check to see if memory use has reached that point. Now, if the thread calls itself (recursion), 3kb gets allocated each time, so after it has happened 3 times we have 9kb of the process space full. One more recursion from the thread then overflows the stack of the main
<ABresting> process, not the thread... so this is one use case where it's a false negative!
<ABresting> in this case libsigsegv reports a segfault
<ABresting> but it's actually a stack overflow of the process
<heller> jbjnr_: we already have basic code checking for stack overflows (the whole business with calculating how much space is left). The problem is that a stack overflow is just UB; that means it can overwrite all this checking code, and/or the memory where we store the information the checking code relies on.
<heller> so you'd really need some mechanism that checks for that *outside* of the system. I guess the OS would be the only place here ... but then it could automagically increase the stack size as needed just as well (as windows does, for example)
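For reference, the guard-page mechanism zao and heller are alluding to can be sketched like this (a toy POSIX example, not HPX's actual stack allocator):

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Reserve a task stack with one PROT_NONE page at the low end. Because
// stacks grow downward, running off the end of the usable region touches
// the guard page and raises SIGSEGV immediately, instead of silently
// corrupting neighbouring memory - unless a single huge frame skips
// clean over the page, which is jbjnr_'s concern above.
void* allocate_guarded_stack(std::size_t usable, std::size_t page = 4096) {
    std::size_t total = usable + page;
    void* p = mmap(nullptr, total, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);
    int rc = mprotect(p, page, PROT_NONE);  // guard page at the low end
    assert(rc == 0);
    (void)rc;
    return p;  // usable stack region is [p + page, p + total)
}
```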
<mcopik> jbjnr_: could you explain to me why you believe that you should not have a stack overflow without tasks? you're still executing within an HPX thread, even when no tasking is performed, right? why do you think this thread can't segfault due to limited stack space?
<heller> david_pfander: ping
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
bikineev has quit [Ping timeout: 240 seconds]
bikineev has joined #ste||ar
<jbjnr_> mcopik: because my code isn't doing any work at all. It's just starting the runtime, doing a few checks and then shutting down. There should be no tasks that need any significant stack.
<ABresting> jbjnr_: even then a stack overflow can happen if you take more space than allocated
<jbjnr_> heller: if I put a std::array[65536] as a local variable in my function - HPX does not know that it overflows my 32KB stack, it only detects this offset if I call another task from that function and then it sees the stack pointer down by 64K.
<jbjnr_> ABresting: yes I know, but I'm not calling any functions that need any stack space.
<heller> jbjnr_: right.
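A tiny illustration of the case jbjnr_ describes above (sizes chosen to match his example; nothing HPX-specific here):

```cpp
#include <array>
#include <cstring>

// Suppose this runs on a task whose stack is only 32 KB. The 64 KB local
// below already places the frame past the end of the stack; the fault
// happens on the first write, long before any scheduler-side check at a
// task boundary could have noticed the stack pointer moving.
void task_body() {
    std::array<char, 65536> buffer;              // 64 KB on a 32 KB stack
    std::memset(buffer.data(), 0, buffer.size());
}
```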
<jbjnr_> It must be some other memory corruption
<heller> are you calling any external APIs from your function?
<jbjnr_> just starting up the hpx runtime. We are debugging our resource partitioner changes and must have broken something tragic in the internals somewhere
<heller> hmm
Smasher has quit [Changing host]
Smasher has joined #ste||ar
Smasher has joined #ste||ar
<jbjnr_> so heller, I had my first heart fluttering false alarm this morning
<heller> what happened?
<jbjnr_> I was told that our HPX paper had been accepted - and that the meteo one from CSCS was rejected
<ABresting> jbjnr_: then it will be dependent on that external entity; the calling thread either receives the success flag or an exit code with a system signal
<heller> jbjnr_: oh? But April has long passed
<jbjnr_> but we didn't have an SC paper, so it is true that the meteo one from CSCS was rejected, but ....
<heller> the tension!
<heller> ...but?
<jbjnr_> well, I was fed hope that our HPX paper was a finalist, but the rumour was about the SC paper - which we didn't write - so it's total pants.
<heller> oh, ok
<jbjnr_> hence my phrase "heart fluttering false alarm this morning"
<heller> right ...
<heller> well ... since it got rejected from the main track, can we assume it is going to be a finalist?
<heller> the meteo one
<jbjnr_> We cannot assume that.
<parsa[w]> pree: your domain design looks brilliant, great job! what's your current status?
<pree> parsa[w] : Doing cartesian domains : )
<parsa[w]> got any code that runs?
<pree> Yes, but it needs some more work
<pree> on the functions to make them asynchronous
<pree> parsa[w] : you're up so early :)
<parsa[w]> :)
<pree> I will write test cases once I have finished cartesian domains
<parsa[w]> do you have plans for unstructured domains?
<pree> opaque domains ?
<pree> thinking
<pree> : )
<pree> Before that I want to do sparse domains for cartesian domains
<david_pfander> heller: hi
<parsa[w]> pree: makes sense... do you have a repository where you keep your domain implementation?
<pree> I have forked one from hpx
bikineev has quit [Ping timeout: 255 seconds]
<parsa[w]> pree: i don't see anything from you on your fork... is this the repo you're talking about? https://github.com/Praveenv98/hpx
<pree> yes
hkaiser has quit [Ping timeout: 240 seconds]
<pree> parsa[w] : I didn't commit anything yet,
Matombo has quit [Remote host closed the connection]
<heller> david_pfander: did the dynamic linking build work now?
<heller> did you also see my comment on the issue?
<david_pfander> heller: nope, I added a comment on your build scripts ~10 mins ago
<david_pfander> heller: In short, the application hangs at HPX initialization
<david_pfander> heller: and at the same time, on cori, with very similar build scripts, it works
<heller> how do you get your allocation?
<heller> what's your salloc command?
<david_pfander> heller: srun -C flat,quad -N 1 --partition=normal --time=00:10:00 --pty bash
<heller> the backtrace is a little useless, there are various other threads started, which probably contain more information on where the hang occurs
<david_pfander> heller: could there be a parcelport problem? I had to disable the libfabric parcelport in the toolchain file, as I didn't compile libfabric (actually, I tried, but then hpx didn't compile, so I disabled it again)
<heller> no. it's a MPI/SLURM issue
<heller> what you want to do: get your allocation with salloc, then do a srun within that allocation
<david_pfander> heller: I can test whether it makes a difference, but AFAIK my srun command implies the salloc and immediately gives me a shell on the allocated node
<heller> try srun ls
<heller> you won't get any output either
<david_pfander> heller: worked in the past
<heller> you mean it worked on cori. maybe
hkaiser has joined #ste||ar
<heller> the recommended way is: salloc -> srun
<david_pfander> heller: no, on tave
<david_pfander> heller: and I just tested srun -C flat,quad -N 1 --partition=normal --time=00:10:00 ls
<david_pfander> that works, too
<heller> that's a different thing
<david_pfander> heller: and with salloc -C flat,quad -N 1 --partition=normal --time=00:30:00
<david_pfander> and then srun mic-knl-gcc-build/octotiger-jemalloc-Release/octotiger, it once again hangs
<david_pfander> (I should get an error that I didn't provide input parameters)
<heller> sbatch and salloc allocate resources to the job, while srun launches parallel tasks across those resources.
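Putting heller's recommendation together with the allocation options david_pfander posted earlier, the workflow would look roughly like this (the constraint/partition values are site-specific, and the binary path is the one from the log):

```sh
# reserve the resources first (interactive allocation) ...
salloc -C flat,quad -N 1 --partition=normal --time=00:30:00
# ... then launch the parallel job inside that allocation; -u (mentioned
# further down) gives unbuffered output, which helps when diagnosing hangs
srun -u mic-knl-gcc-build/octotiger-jemalloc-Release/octotiger
```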
<heller> give it a while, the first invocation takes a while to load all the binaries etc.
<david_pfander> heller: I allocated 30mins, let's see whether something happens
<heller> program startup takes 8 seconds for me.
<hkaiser> pree: so you're working on domains after all?
<david_pfander> heller: that's how it was for me in the past (and is still on cori)
<hkaiser> didn't we agree on you working on domain maps, first and foremost?
<david_pfander> heller: still hanging
<david_pfander> heller: unfortunately, I have to go to a meeting... back in 30 mins
<heller> david_pfander: well, it works for me. I have no idea what you are doing differently.
<pree> hkaiser : sorry I didn't get you
<david_pfander> heller: could you post the list of modules you loaded?
<heller> and I have no time to debug the problem for you either, sorry.
denis_blank has joined #ste||ar
<hkaiser> pree: we discussed you working on domain maps, first
<hkaiser> i.e. distribute the partitions of a partitioned vector
<david_pfander> heller: ok, thanks for your help
<heller> david_pfander: there are various other options to get information on the startup. --hpx:debug-hpx-log, --hpx:list-parcel-ports, --hpx:bind
<heller> since you are sorry
<heller> --hpx:print-bind, not just --hpx:bind, of course
<heller> i'd also suggest to limit the number of threads to 64
bikineev has joined #ste||ar
<pree> hkaiser : I did domains first so that you can then map the indices to the localities (domain maps)
<hkaiser> pree: pls post your code somewhere
<pree> okay
<hkaiser> pree: also, do you need distribution policies for those domains?
<heller> david_pfander: since you are only using a single locality, problems with the parcelport are unlikely. there is also the option to completely disable networking altogether. If you want to debug your application, just looking at thread 0 won't give you a lot. this will always "hang" in the condition variable. it waits until the application gets shut down. this is the main thread, which is, in the case of octotiger, not used.
<pree> No, I kept that separate. That comes into play only in the domain map classes
<pree> hkaiser ^^
K-ballo has joined #ste||ar
<hkaiser> parsa[w]: right
<hkaiser> pree: right
<hkaiser> pree: did you talk to parsa[w] now? I mean extensively?
<hkaiser> parsa[w]: are you on top of what pree is doing?
<pree> hkaiser : We had some chats a few minutes ago
<hkaiser> k
<github> [hpx] hkaiser pushed 1 new commit to fixing_2699: https://git.io/vQs3R
<github> hpx/fixing_2699 83a63e8 Hartmut Kaiser: Fixing all uninitialized* algorithms
<hkaiser> I'll look at the logs, then
<pree> okay
<hkaiser> pree: I still think your main focus should be the domain maps, but if you think you need some simple index mapping as well for that - sure
<hkaiser> however I still believe mdspan would be the way to go for the index mappings
<pree> hkaiser : I will post it now. I think you are confusing me
<mcopik> hkaiser: I have created a patch bringing the support for [[fallthrough]], should I create a PR given that HPX does not build currently with gcc 7.1 in CXX17 mode?
<heller> david_pfander: also, pass -u to srun for unbuffered output.
<hkaiser> mcopik: yes, pls - do it anyways. I hope they will fix Boost
<mcopik> hkaiser: sure
<mcopik> I'd like to fix the rest of issues with gcc7 but it's not possible right now
<hkaiser> works only in c++14 mode, I know
<pree> hkaiser : see private
<github> [hpx] mcopik opened pull request #2717: Config support for fallthrough attribute (master...cxx17_fallthrough) https://git.io/vQssh
<hkaiser> mcopik: thanks!
<mcopik> hkaiser: you're welcome :)
bikineev has quit [Read error: No route to host]
bikineev has joined #ste||ar
<mcopik> hkaiser: I had a similar question myself while reading those comments. he's the most active person on r/cpp in discussions on Boost and usually speaks like a veteran and a large contributor
<mcopik> has he contributed anything besides rejected AFIO and Outcome?
<hkaiser> mcopik: yah - that's where it all started; I called him out two years ago that he's trying to usurp Boost for his commercial needs - he hates me ever since
* zao upvotes the comment for the GERMAN HPC MAFIA
<zao> (personal reasons, actually, for anyone reading logs)
<hkaiser> lol
<hkaiser> Niall: "What the Steering Committee's role is is not fixed in stone. ... It has the power and money..."
<hkaiser> there we come to the heart of it - power and money - that's what he's after
<hkaiser> the day he will become a member of the steering committee will be the day Boost dies
<heller> hkaiser: pointless allegations, since when did anyone ever strive for power and money?
<hkaiser> hah, you're right
<heller> ... and we started singing ... bye bye miss american code
<heller> I like that one: "I hate boost, it's an over-engineering mess of who can out-template who." reply: "Show me the better alternative for Boost.MSM. Or Hana, or Spirit, or Xpressive, or Proto."
<heller> qed?
<hkaiser> nod
pree has quit [Read error: Connection reset by peer]
pree has joined #ste||ar
<heller> why do people constantly bash boost.build though?
<heller> I never had a problem building boost with it, on any platform
EverYoung has joined #ste||ar
<mcopik> heller: for me it took some time to figure out how to build it correctly against a specific build of libc++
<mcopik> docs were not helpful
<heller> sure
<hkaiser> mcopik: shouldn't you add semicolons to the uses of HPX_FALLTHROUGH now?
<heller> well, all you need is cxxflags and linkflags, or whatever it is called :)
<heller> mcopik: and you have to learn every other build system as well ;)
Matombo has joined #ste||ar
EverYoung has quit [Ping timeout: 258 seconds]
<mcopik> hkaiser: right, for some reason gcc accepts fallthrough without a semicolon
<mcopik> heller: yes, although I remember that either you could not figure it out immediately or there were some specific problems
<hkaiser> mcopik: ok, maybe we don't need any (I'm just not sure myself)
<mcopik> I think we do
<mcopik> they don't answer why it's necessary but they state it is
<hkaiser> k
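The semicolon question comes down to grammar: [[fallthrough]] must be attached to an (empty) statement, so the call site needs a trailing ';'. A sketch of how a config macro in the spirit of HPX_FALLTHROUGH could be defined and used (an illustration, not HPX's actual definition):

```cpp
// Use the attribute when the compiler understands it, expand to nothing
// otherwise; either way the call site keeps its trailing semicolon.
#if defined(__has_cpp_attribute)
#  if __has_cpp_attribute(fallthrough)
#    define MY_FALLTHROUGH [[fallthrough]]
#  endif
#endif
#if !defined(MY_FALLTHROUGH)
#  define MY_FALLTHROUGH
#endif

int classify(int x) {
    switch (x) {
    case 0:
        ++x;
        MY_FALLTHROUGH;  // deliberate fall-through; the ';' makes it a statement
    case 1:
        return x;
    default:
        return -1;
    }
}
```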
pree has left #ste||ar ["Ex-Chat"]
pree has joined #ste||ar
ajaivgeorge has joined #ste||ar
ajaivgeorge_ has joined #ste||ar
<parsa[w]> "<hkaiser> parsa[w]: are you on top of what pree is doing?" his design makes sense but i need to see code
ajaivgeorge has quit [Ping timeout: 240 seconds]
<hkaiser> parsa[w]: he said he sent the code to you
<hkaiser> it was also linked here above
<pree> parsa[w] : I sent it by mail
<parsa[w]> the domain implementation? yes, but that's a sketch; it doesn't compile as is... does it?
<hkaiser> shrug
<hkaiser> parsa[w]: was this domain design discussed somewhere?
<parsa[w]> hkaiser: it's his idea
<hkaiser> sure
<hkaiser> parsa[w]: do you approve it?
<pree> parsa[w] : I will send you with a link to repo.
<parsa[w]> it's just the domain implementation, yes it makes sense... i'm not clear how to proceed with data distribution policies after that.... right now we're looking at it from a Chapel user's perspective
<parsa[w]> at least i am
<hkaiser> nod, makes sense
<hkaiser> parsa[w]: does the current design integrate/complement/replace/... with the mdspan to be proposed for standardization?
<hkaiser> is it related at all?
<parsa[w]> bryce's proposal?
<hkaiser> yes
<parsa[w]> i don't know much about it
<hkaiser> parsa[w]: might be a good idea to see how that fits into what pree is doing
<pree> hkaiser : I'm looking for solutions for how to do this with mdspan
<pree> but didn't find yet
<pree> parsa[w] ^^
eschnett has quit [Quit: eschnett]
<hkaiser> pree: the question is whether you can use mdspan directly as your domain implementation, or perhaps implement domain on top of mdspan, if necessary
bikineev has quit [Read error: No route to host]
<parsa[w]> hkaiser: for the standard domain yes
bikineev has joined #ste||ar
<hkaiser> parsa[w]: what is a 'standard domain'?
<pree> heller : yt ?
<parsa[w]> a domain that refers to an array that's not sparse, unstructured, or fancy in any way, just a contiguous array
<hkaiser> parsa[w]: isn't that what we're targeting anyways?
<heller> pree: best is to discuss your questions here. I currently don't have the resources to discuss those things with you
<hkaiser> I don't think sparse arrays are part of this GSoC project
akheir has quit [Remote host closed the connection]
<parsa[w]> we'd go for sparse domains once we can do dense rectangular domains and arrays, right
<pree> Not sparse arrays,
<parsa[w]> pree: ?
<hkaiser> in the end sparse arrays are to be pulled in through an external library
aserio has joined #ste||ar
<pree> parsa[w] : I mean, as of now
<parsa[w]> okay
<pree> But I really don't know what to answer to hkaiser's question
<pree> mdspan < -- > domains
<parsa[w]> domain's just a set of indices... those indices can be in an mdspan
<hkaiser> parsa[w]: that's what I'm saying - do we need a domain implementation from scratch?
<hkaiser> anyways, gtg
hkaiser has quit [Quit: bye]
<pree> parsa[w] : ??
<parsa[w]> pree: domains are used for loops and declaring/working with arrays... they just represent a set of indices
<parsa[w]> that array can be an mdspan
<parsa[w]> does that make sense?
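To make parsa[w]'s point concrete, a toy version of "a domain is just a set of indices" might look like this (hypothetical names, not pree's actual design; the storage could be a std::vector today or an mdspan later):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A dense 2D rectangular domain: nothing but the index set
// [0, rows) x [0, cols).
struct domain2d {
    std::size_t rows, cols;

    // Enumerate every index in the domain and hand it to f. Any
    // array-like storage indexed by (i, j) can be driven by this.
    template <typename F>
    void for_each(F&& f) const {
        for (std::size_t i = 0; i != rows; ++i)
            for (std::size_t j = 0; j != cols; ++j)
                f(i, j);
    }
};

int main() {
    domain2d d{2, 3};
    std::vector<int> a(d.rows * d.cols);
    d.for_each([&](std::size_t i, std::size_t j) {
        a[i * d.cols + j] = static_cast<int>(10 * i + j);
    });
    d.for_each([&](std::size_t i, std::size_t j) {
        std::cout << a[i * d.cols + j] << ' ';
    });
    std::cout << '\n';
}
```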
<pree> Okay! I want to keep things clear. What do I have to do now?
<pree> Is it okay, or do I have to change the whole thing?
<pree> parsa[w] ^^
jakemp has joined #ste||ar
<parsa[w]> hpx doesn't have mdspan support right now... you go ahead with arrays, i'll ask hkaiser about that when he shows up
<pree> Okay ! Done
<pree> parsa[w] ^^
<mcopik> has anyone had linking problems for HPX on Rostam when building against clang?
<mcopik> cmake correctly sets up clang4.0.0, correctly detects Boost 1.6.4-clang4.0.0 and yet linking of the shared library fails on API functions with string
<mcopik> suggesting a mix between different string implementations?
eschnett has joined #ste||ar
ajaivgeorge_ has quit [Ping timeout: 240 seconds]
diehlpk_work has joined #ste||ar
<heller> mcopik: different standards? different standard libraries?
<mcopik> heller: could we have broken this by enforcing C++17 on clang? it's a recent update
ajaivgeorge has joined #ste||ar
<heller> mcopik: 1) i have no idea how you built boost 2) I have no idea how you build hpx 3) I have no idea what the actual error is
<heller> so, no idea how to fix it ;)
<mcopik> heller: clang4.0.0, Boost 1.6.4-clang4.0.0 on Rostam
<mcopik> from modules
<mcopik> I didn't build Boost
Matombo has quit [Ping timeout: 268 seconds]
<heller> yeah, looks like a mismatch there
<heller> stdlib/std variant
<heller> mcopik: -DCMAKE_CXX_FLAGS=-stdlib=libc++ <--- this is the magic ingredient
<mcopik> heller: now it makes sense. but why is Boost built against libc++ on Rostam?
<mcopik> I assumed it's the default clang build on this platform
<mcopik> I know it's a sign of mixing libstdc++ vs libc++, it's just that I didn't think about it and there is no information on that on the Wiki https://github.com/STEllAR-GROUP/hpx/wiki/Running-HPX-on-Rostam
<mcopik> can I edit it and add a short info that clang's build of Boost is linked against libc++?
<heller> Yes, please do
<diehlpk_work> mcopik, jbjnr_ parsa[w] June 26 16:00 UTC Mentors and students can begin submitting Phase 1 evaluations
<K-ballo> the default clang build isn't C++11 capable, is it? it uses some arcane libstdc++ version
pree has quit [Ping timeout: 255 seconds]
bikineev has quit [Ping timeout: 240 seconds]
EverYoung has joined #ste||ar
pree has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
bikineev has joined #ste||ar
Matombo has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
denis_blank has quit [Quit: denis_blank]
bibek_desktop has quit [Quit: Leaving]
akheir has joined #ste||ar
denis_blank has joined #ste||ar
bibek_desktop has joined #ste||ar
Matombo has quit [Ping timeout: 268 seconds]
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
EverYoung has joined #ste||ar
<aserio> heller: yt?
EverYoung has quit [Ping timeout: 276 seconds]
pree has quit [Ping timeout: 255 seconds]
pree has joined #ste||ar
pree has quit [Ping timeout: 255 seconds]
bikineev has joined #ste||ar
pree has joined #ste||ar
ajaivgeorge_ has joined #ste||ar
<heller> aserio: what up?
wash has quit [Quit: Lost terminal]
<aserio> I was wondering if you would send an email to contact@stellar-group.org to see if it works right
<aserio> also you will need to send the email not with your gmail address...
<aserio> as I have added you to that one
ajaivgeorge has quit [Ping timeout: 240 seconds]
<aserio> heller: ^^
<zao> aserio: Sent a message from zao@zao.se right now.
<aserio> \o/ I can click buttons in the right order
<aserio> zao: thanks!
<zao> response gotten.
<diehlpk_work> This special issue of Journal of Humanistic Mathematics will explore the intersectionality of mathematics and motherhood with the aim of empowering more women in mathematics to pursue careers in the mathematical sciences and the professoriate. The issue will feature articles, autobiographical stories, poetry, and essays from a diverse set of women who have found success and balance in their mathematics career and motherhood.
<diehlpk_work> Who likes to submit?
hkaiser has joined #ste||ar
denis_blank has quit [Quit: denis_blank]
bikineev has quit [Remote host closed the connection]
aserio has quit [Ping timeout: 276 seconds]
david_pfander has quit [Ping timeout: 240 seconds]
bikineev has joined #ste||ar
jbjnr has joined #ste||ar
jbjnr_ has quit [Ping timeout: 255 seconds]
<heller> I need a time machine...
ajaivgeorge_ has quit [Remote host closed the connection]
ajaivgeorge_ has joined #ste||ar
EverYoung has joined #ste||ar
<pree> People : how do I choose which continuation to execute, based on the future.get() result, with future.then?
<pree> Suppose I have a future<bool> f
EverYoung has quit [Ping timeout: 258 seconds]
<pree> f.then( execute 1 when f.get() == 1, execute 2 when f.get() == 0 )
<pree> How do I do it? I didn't look at the docs
aserio has joined #ste||ar
<heller> Just do it like that.
<heller> f.get() will return a bool. Just C++
<pree> heller : Thanks, I had mistakenly assumed f.then() would call f.get() internally
<heller> You can combine both
<pree> heller : what "combine both " ? Are you saying combine f.then() && f.get() calls ..
<heller> sure
<heller> future<void> ff = f.then([](future<bool> f){ if (f.get()) { then(); } else { do_otherwise(); }});
<heller> just C++
<heller> no magic
<heller> (but you never know, I saw a tiny flickering of octarine the other day)
<zao> I love it when you're painfully reminded that you're writing actual C++ when running into dumb bugs in Phoenix and Spirit.
<pree> heller : nice ! I got what I wanted. Thank you
<heller> zao: that's the spirit!
<heller> zao: remind me, I still need to send a wizard hat to a undisclosed location, accepting donations
<pree> heller : One doubt! We know we are attaching a continuation to future<bool> f, but why do we explicitly pass future<bool> into the lambda? This has been on my mind since I started looking at hpx
<pree> But I simply left it as it was
<heller> it's not a HPX specific thing
<heller> it's a C++ thing
<pree> Oh Okay thanks
<heller> the answer to the question is: asynchronous operations might throw exceptions, you might want to get notified, and future<T>::get rethrows an exception that might have happened
<heller> that's why continuations need to pass the original future along
<heller> also, future<T>::then invalidates this, so you lose any handle to it
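A self-contained version of the pattern from heller's snippet above, with the exception rationale spelled out (a sketch assuming the usual hpx::async/hpx_main setup, not the only way to write it):

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/lcos.hpp>
#include <iostream>

int main() {
    hpx::future<bool> f = hpx::async([] { return true; });

    // The continuation receives the ready future itself rather than the
    // bare bool: calling get() inside it is what lets an exception thrown
    // by the producer surface at a point where we can handle it.
    hpx::future<void> ff = f.then([](hpx::future<bool> f) {
        try {
            if (f.get())
                std::cout << "then branch\n";
            else
                std::cout << "otherwise branch\n";
        } catch (std::exception const& e) {
            std::cout << "producer threw: " << e.what() << '\n';
        }
    });

    ff.get();  // note: f itself was invalidated by then()
    return 0;
}
```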
<pree> Oh! good answer! Only now do I remember that you said this in one of the video lectures with john
<pree> heller : Thank you
<heller> sure
<heller> so the real question is: why do we even bother with creating documentation and tutorials?
<pree> heller : Is this to me? I can't understand what you are saying
<pree> I'm not comfortable with foreign english
<zao> I keep forgetting the existence of exceptions and the nifty way that you can tunnel them in modern C++.
<heller> ok
<heller> pree: i'll keep that in mind
<pree> heller : It is somewhat different in India !
<zao> No doubt :P
<pree> It's common when you try to learn another language
<pree> zao : Did you have that experience with your Mumbai student?
<pree> : )
<zao> Oh boy, his english was atrocious when I started.
<zao> pree: At least you don't say "u", that's a great start :)
<heller> pree: you'll manage, I am sure
<pree> heller : Thank you
<pree> zao : It's chat language , I barely chat
<pree> zao : It's common when you learn other languages! If you learn our language you will struggle to understand some things : )
<pree> haha
<zao> Så sant som det är sagt. ("As true as it is said.")
<pree> zao : sorry, I don't have a "Tamil" keyboard
<pree> :)
<zao> :D
<pree> I think code will be the universal language
<pree> xD
<hkaiser> heller: yt?
<heller> hkaiser: hey
<hkaiser> heller: see pm, pls
<thundergroudon[m> hkaiser: any guidance regarding the selection of a benchmark for HPXCL?
<thundergroudon[m> I had sent an email to the community previously
<hkaiser> thundergroudon[m: will respond to the email later today
<hkaiser> sorry, I was out for 2 weeks
<thundergroudon[m> Sure! :)
<thundergroudon[m> Thank you so much!
denis_blank has joined #ste||ar
hkaiser has quit [Quit: bye]
bikineev has quit [Remote host closed the connection]
zbyerly has joined #ste||ar
eschnett has quit [Quit: eschnett]
pree has quit [Quit: AaBbCc]
hkaiser has joined #ste||ar
aserio has quit [Quit: aserio]
bikineev has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
jakemp has quit [Ping timeout: 260 seconds]
akheir has quit [Remote host closed the connection]
ajaivgeorge has joined #ste||ar
ajaivgeorge_ has quit [Ping timeout: 255 seconds]
Matombo has quit [Remote host closed the connection]
vamatya has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
denis_blank has quit [Quit: denis_blank]
mbremer has joined #ste||ar
jgoncal has joined #ste||ar
eschnett has joined #ste||ar
EverYoung has joined #ste||ar
mbremer has quit [Ping timeout: 260 seconds]
EverYoung has quit [Ping timeout: 246 seconds]