hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu | Everybody: please respond to the documentation survey: https://forms.gle/aCpNkhjSfW55isXGA
Yorlik_ has joined #ste||ar
Yorlik has quit [Ping timeout: 272 seconds]
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Read error: Connection reset by peer]
diehlpk_work has quit [Remote host closed the connection]
<gnikunj[m]> ms: who manages the data when we're setting and getting thread data? From this test, it seems like the responsibility is on the user: https://github.com/STEllAR-GROUP/hpx/blob/2d09e4da41ada84535e23ccb2f11259021dd37af/tests/regressions/threads/thread_data_1111.cpp
<ms[m]> gnikunj: I think the confusion might just be due to bad naming
<ms[m]> the thread_data in that test is exactly the "user-provided" thread data that hkaiser mentioned earlier
<ms[m]> not sure if that's what you're thinking about
<ms[m]> there's also the thread_data which holds the actual thread stack, priority, annotation, etc. etc.
<gnikunj[m]> Yes, I want to add data to a thread using those 2 functions. The functions accept std::size_t, so we have to do a reinterpret_cast to pass in a 64-bit pointer. I wanted to ask who manages the deletion of the pointer that we're setting on the thread?
<gnikunj[m]> does it get deleted within the destructor or are we as the user supposed to manage the pointer?
<ms[m]> you are, there's no way for the thread itself to know how or what you want to be deleted
<gnikunj[m]> Right, got it.
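(For readers following along: the upshot of the exchange above is that the std::size_t slot is opaque to the runtime, so a pointer is squeezed through reinterpret_cast and its lifetime stays with the caller. A minimal sketch of that pattern, assuming the hpx::threads::set_thread_data/get_thread_data free functions and headers used in the linked regression test; not code from this log:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/threads.hpp>

    #include <cstddef>
    #include <memory>

    struct user_data
    {
        int value = 42;
    };

    int main()
    {
        // with hpx_main.hpp, main() runs as an HPX thread, so get_self_id() is valid
        hpx::threads::thread_id_type id = hpx::threads::get_self_id();

        // attach a user-owned pointer; the API only stores an opaque std::size_t
        auto data = std::make_unique<user_data>();
        hpx::threads::set_thread_data(
            id, reinterpret_cast<std::size_t>(data.get()));

        // read it back later and cast it to the original type
        auto* p =
            reinterpret_cast<user_data*>(hpx::threads::get_thread_data(id));
        (void) p;

        // the thread never frees this pointer; the unique_ptr (i.e. the user)
        // does when it goes out of scope
        return 0;
    }
)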
hkaiser has joined #ste||ar
<hkaiser> ms[m]: thanks for the release! very nice.
<hkaiser> it's one of the best releases ever ;-) !
<ms[m]> hkaiser: *the* best, no? :P
<hkaiser> +1
<ms[m]> in any case, I'm just the messenger
<ms[m]> so thanks to everyone for making it the best release!
<hkaiser> ms[m]: let's hope it won't be the last
<ms[m]> announcements etc. are ready as usual, but will come once docs are up
<hkaiser> sure, I'll do vcpkg updates, reddit, etc.
<ms[m]> hkaiser: that sounds ominous, I hope so too...
<ms[m]> good, thanks!
<rachitt_shah[m]> <ms[m] "announcements etc. are ready as "> Do you need a hand with the docs?
<hkaiser> always
<ms[m]> rachitt_shah: thanks for the offer, but I think circleci is on it ;)
<ms[m]> how's it going for you?
<rachitt_shah[m]> Pushing the PR right now, then I'll be looking after doxygen+wiki
<rachitt_shah[m]> I might need help on doxygen
<ms[m]> all right, just let me know when you need help
K-ballo has joined #ste||ar
<rachitt_shah[m]> Sure, congrats on the release 🚀
Yorlik_ is now known as Yorlik
* Yorlik waves to hkaiser
<hkaiser> hey Yorlik
<hkaiser> how's life?
<Yorlik> Just dropping by in the middle of some renovations.
<Yorlik> I started working on our project again, but it's the build system again.
<Yorlik> Want to get the CI going.
<hkaiser> nod, good luck
<Yorlik> Apart from that, super busy around the place - renovations, work on the grounds, driving the tractor we just got :D
<hkaiser> hah, reminds me of the old joke about the kid that liked to drive a tractor
<zao> Hey there o/
<hkaiser> gnikunj[m]: were you able to reproduce the asan problem?
<gnikunj[m]> The tests are still building on my machine ;/
<hkaiser> ahh
<gnikunj[m]> I should be able to do so by today
<hkaiser> nice, thanks
<gnikunj[m]> hkaiser: do we have a new allocation on loni?
<gnikunj[m]> I want to benchmark something and I need a stable x86 cluster for it
<hkaiser> gnikunj[m]: sure, please ask Dominic
<gnikunj[m]> Ohh ok. Let me email him and CC you.
<hkaiser> ms[m]: may I ask you for help with this: https://cdash.cscs.ch/buildSummary.php?buildid=170230?
<hkaiser> I can't even compile the thread_pool_scheduler test with msvc :/
<ms[m]> hkaiser: yeah, sure, I'll try to have a look (not immediately, but this week for sure)
<ms[m]> that's your refcounted thread data pr, right?
<zao> Breaking MSVC? I'm amazed and surprised ;)
<hkaiser> ms[m]: yes
<hkaiser> zao: never happened before - and yet again
<zao> cd ..
<zao> <_<
<zao> The Windows 11 terminal is a bit weird in how it renders the caret, thought I had focus.
<zao> Ah, you need C++17 to get that test included, explains why I was missing it with the default C++14 on MSVC.
<zao> I'm guessing this is what hkaiser sees? C:\stellar\hpx-pr5441\libs\core\execution_base\include\hpx\execution_base\sender.hpp(718): fatal error C1202: recursive type or function dependency context too complex
<zao> (this on VS 16.10.3)
<zao> Not specific to the PR, btw.
qiu has joined #ste||ar
<hkaiser> zao: yes
<hkaiser> yes, that must have been introduced recently as I was able to compile things before :/
<hkaiser> so much for having a release...
<zao> Tangentially, VS 2022 Preview (17.0.0 Preview 1.1) has an earlier build failure that 2019 doesn't: https://gist.github.com/zao/7cf282f909726eeccf4617305f5a16f4
<zao> Not sure how much we care about prerelease VS versions tho :)
<zao> Well, not earlier, different.
<zao> (concurrent ninja output tricked me)
<zao> Ah no, it's the same error, just with more/different context.
diehlpk_work has joined #ste||ar
<ms[m]> "context too complex" 😢
<hkaiser> zao: yes, I have seen that as well, this is caused by a change I made to the sender/receiver when_all algorithm
<hkaiser> I have not found the cause for the actual 'too complex' problem yet
<ms[m]> I can go ahead and remove the implicit executor as scheduler or vice versa stuff from the cpos, that might help (I want to do it anyway, so worth a try)
<hkaiser> will bisect
<hkaiser> ms[m]: nah, it worked before
<zao> My curiosity whether I could reproduce it has been satisfied :)
<hkaiser> let me find the culprit first
<ms[m]> 👍️
<hkaiser> zao: thanks
<ms[m]> hkaiser: I'll go ahead with the release announcement in any case now
<hkaiser> ms[m]: sure
<ms[m]> we can do a patch release again soon if we manage to get that fixed
<hkaiser> nod
<ms[m]> let me know if you find anything that's off!
<hkaiser> nice, thanks!
<diehlpk_work> Can one kill all hanging jenkins jobs on rostam?
<gnikunj[m]> hkaiser: is it a single node run? https://github.com/STEllAR-GROUP/hpx/runs/3052374495
<hkaiser> gnikunj[m]: yes, but 2 localities
<gnikunj[m]> hkaiser: I see :/ I spent 3h building tests on my laptop only to be met with an assertion error telling me I need to run on more than 1 locality ;_;
<gnikunj[m]> let me reproduce it on rostam and see
<hkaiser> gnikunj[m]: nah
<hkaiser> you can run it on your laptop
<hkaiser> hpxrun.py -l 2 -t 1 ./executable
<gnikunj[m]> let me try that
<zao> I noticed something odd trying to run tests on my Windows machine, a lot of the early tests try to invoke just hpxrun.py, with no arguments.
<zao> Not sure if that's by design and the results are ignored somewhere.
<gnikunj[m]> hkaiser: I can't reproduce ASAN on my machine :/
<ms[m]> diehlpk_work: alireza can
<gnikunj[m]> nk@f3lix:~/projects/hpx/build(ref_counted_thread_data)$ ./bin/hpxrun.py -l 2 -t 1 ./bin/async_replay_distributed_test
<gnikunj[m]> Replay: 0.0394612
<ms[m]> zao: ctest -v output?
<zao> Might've been due to not having built all targets, it seems to correctly report the missing test EXEs now.
<hkaiser> gnikunj[m]: nod, thought so
<hkaiser> gnikunj[m]: btw, I think the ASAN problem only shows up if you run it on more than one core
<gnikunj[m]> hkaiser I see. My clang build is currently on its way. I'll try running it on more than one core.
<hkaiser> +1 thanks
<gnikunj[m]> hkaiser: my `make tests -j4` didn't build all tests. It failed at hello_world_client screaming
<gnikunj[m]> ==176493==ERROR: LeakSanitizer: detected memory leaks
<gnikunj[m]> 1/1 Test #1: hello_world_test .................***Failed 0.32 sec
<hkaiser> gnikunj[m]: lol
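(For anyone retracing the ASAN hunt above, a possible way to configure an AddressSanitizer build and run the failing test on 2 localities with more than one worker thread each. This is a sketch, not taken from this log: the HPX_WITH_SANITIZERS option and the exact flags are assumptions; the test binary name is the one from gnikunj's run earlier:

    cmake -S hpx -B build \
        -DCMAKE_BUILD_TYPE=Debug \
        -DHPX_WITH_SANITIZERS=ON \
        -DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer"
    cmake --build build --target tests -- -j4
    # more than one locality and more than one core per locality
    ./build/bin/hpxrun.py -l 2 -t 2 ./build/bin/async_replay_distributed_test
)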
<gonidelis[m]> how do we make the compiler just warn us for the deprecation warnings and not throw an error
<gonidelis[m]> ?
<gnikunj[m]> gonidelis: are you talking about `-Wno-error=foo` (foo is the error that you want as warning)
<gonidelis[m]> yup
<gonidelis[m]> what about deprecation?
<gonidelis[m]> got it thanks!
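(The answer isn't captured in the log; for reference, the diagnostic in question is presumably GCC/Clang's -Wdeprecated-declarations, so demoting just deprecation from error to warning while keeping -Werror otherwise would look roughly like this, as an illustration:

    # GCC/Clang: keep -Werror overall, but let deprecation notes through as warnings
    cmake -DCMAKE_CXX_FLAGS="-Wno-error=deprecated-declarations" ...
)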
<hkaiser> diehlpk_work: many thanks!
<diehlpk_work> hkaiser, ms[m] 1.7.0 built on x86
<diehlpk_work> So I think we are fine
<diehlpk_work> all other arches will finish by tomorrow morning
qiu has quit [Quit: Client closed]
<ms[m]> diehlpk_work: sounds good, thanks!
<hkaiser> ms[m]: btw, the release is good - the problem occurs only if sanitizers are enabled - so it's really a compiler problem