aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
mcopik_ has quit [Ping timeout: 240 seconds]
eschnett has joined #ste||ar
Matombo444 has joined #ste||ar
pree has joined #ste||ar
Matombo has quit [Ping timeout: 248 seconds]
StefanLSU has joined #ste||ar
Guest31423 has quit [Quit: This computer has gone to sleep]
Matombo444 has quit [Remote host closed the connection]
Matombo has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
patg has joined #ste||ar
patg is now known as Guest45685
pree has quit [Quit: AaBbCc]
StefanLSU has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
K-ballo has quit [Quit: K-ballo]
Guest45685 has quit [Quit: See you later]
hkaiser has quit [Quit: bye]
vamatya has joined #ste||ar
<heller> zbyerly: extension
<heller> Yay
<heller> I knew it ;)
<zbyerly_> heller, that's a good sign right
<heller> Yes
<heller> We have till the 29th to finish that paper
parsa has joined #ste||ar
zbyerly_ has quit [Ping timeout: 240 seconds]
zbyerly_ has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
Matombo has quit [Remote host closed the connection]
AnujSharma has joined #ste||ar
vamatya has quit [Ping timeout: 240 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/v5Q6u
<github> hpx/master 9c81e74 Thomas Heller: Merge pull request #2902 from STEllAR-GROUP/fix_service_executor...
<github> [hpx] sithhell pushed 2 new commits to master: https://git.io/v5Q6g
<github> hpx/master 09157fb Thomas Heller: Fixing partitioned_vector registration...
<github> hpx/master 4cb2256 Thomas Heller: Fixing partitioned_vector creation...
<github> [hpx] sithhell closed pull request #2901: Fixing partitioned_vector creation (master...partition_vector_fix) https://git.io/v5MnE
<github> [hpx] sithhell deleted partition_vector_fix at a77d93f: https://git.io/v5Q6i
<heller> now, let's see how much green we'll get
<heller> jbjnr: I am writing a unit test for the used_processing mask now
<jbjnr> ok. thanks.
<github> [hpx] sithhell pushed 1 new commit to throttle_cores: https://git.io/v5Q6b
<github> hpx/throttle_cores da663c9 Thomas Heller: Fixing Copyright
<heller> so build times seem to have been drastically reduced now!
<github> [hpx] sithhell pushed 1 new commit to fix_rp_again: https://git.io/v5Qid
<github> hpx/fix_rp_again 1135443 Thomas Heller: Adding unit test for used pu mask.
<heller> jbjnr: #2878 next, then #2900 then #2891 (If I get approved, finally)
<jbjnr> commented
<heller> counting the green: 2/34
<jbjnr> while 2<34 ; do echo "fail"; done
<jbjnr> :)
<heller> counting the green: 4/34
<heller> there are two tests known to fail
<heller> still
<heller> inclusive_scan_executor_v1 and one of the papi tests
<jbjnr> I will look at papi if you like
<heller> the papi test fails because the PAPI counter isn't available on one of the platforms
<heller> so we'd need to select a counter that's available everywhere
<heller> counting the green: 5/34
<heller> (also, the 6 builders known to fail to build core are not counted)
<zbyerly_> heller, the OpenSuCo extension email got caught by gmail's spam filter, i'm glad you told me about it!
<jbjnr> pfff
<heller> zbyerly_: yes!
<jbjnr> you still have to count the fails. even if they are known ones
<jbjnr> we should just drop intel
<heller> counting the green: 6/34
<heller> counting the green: 6/40
<heller> better?
<jbjnr> is the mask in here a numa node mask - they are not very clear in the explanation
<jbjnr> ooh.
<jbjnr> easier
<heller> yup. and portable
<heller> jbjnr: still waiting on the graph ;)
<heller> counting the green: 7/40
<jbjnr> you'll need some patience
<heller> patience sucks
<jbjnr> I'm away most of next week and all weekend, so graph will be a long time.
bikineev has joined #ste||ar
<heller> counting the green: 8/40
<heller> too bad
<heller> would have been nice if I could have included the graph in my talk!
<jbjnr> to give you an idea of where we are headed, the bottom 3 of this graph are close https://pasteboard.co/GKrzOxr.png
<heller> counting the green: 9/40
<heller> excellent!
<heller> and we still have some threads to go!
<heller> weee!
<jbjnr> the top 3 - block size 256 are still a bit slow, but I'll put back all the tweaks I've been tweaking and breaking and work on that again
<jbjnr> no. that's HP queues, not threads
<heller> what's to beat is: 21/40 greens
<jbjnr> proper graphs to follow when I clean up all the tweaks that need untweaking
<heller> jbjnr: ahh, gotcha
<heller> strange viz then
<jbjnr> I just chose to plot the parsec at 36
<jbjnr> strange viz - yes, non final plots
<heller> what's the significance of the smaller block sizes?
<jbjnr> more smaller tasks, more pressure on the scheduler than with fewer larger tasks
<heller> ok, so just theoretical value then, I guess
<heller> or well, for the sake of showing the weaknesses we still have ;)
<jbjnr> (and you can ignore the red hpx plot, since now mostly it's fixed internally anyway, so for future plots, I'll drop it)
<heller> counting the green: 10/40
<heller> great
<heller> jbjnr: is this with my patch to stay within a pool if no executor is given?
<heller> would be interesting if there is an implication for you there
<jbjnr> yes, but I'm only using 1 pool on these tests
<jbjnr> 2 pools once I go distributed
<heller> ok
<heller> good to know then
<jbjnr> distributed is broken at moment, hence delay in graphs
<heller> you should really switch to one pool per numa domain
<jbjnr> but if I get a good one before 21st, I'll make sure you have a copy
<heller> great!
<heller> would be end of 21st for you anyways
<zbyerly_> oh hey guys while i have you here
<zbyerly_> i'm getting an error that says something like "tried to access pool 1 (starts at 0) there are only 1 pools"
<heller> we're always here ;)
<zbyerly_> let me reproduce it real quick
<heller> hmm, that shouldn't happen, do you try to use more than one thread pool?
<zbyerly_> i think so
<zbyerly_> i'm not doing anything differently than I normally do
<heller> something is different now
<heller> let's figure out what it is
david_pfander has joined #ste||ar
<zbyerly_> i'm using intel's mpi
<zbyerly_> could that be it
<heller> nope
<zbyerly_> let me get you the full error
<heller> let me get you the code
<jbjnr> heller: is your pool assignment for threads in master? that might have broken it
<jbjnr> if it returns 1 instead of 0, then this error would appear
<jbjnr> the default pool should always be pool 0
<heller> jbjnr: pool assignment?
<heller> jbjnr: I don't remember any pool assignments
<jbjnr> the error is coming from get_pool_name. might be a perf counter cock up
<zbyerly_> let me try it without the perf cntrs
<jbjnr> zbyerly_: can you get more stack backtrace? - yes, or disabling perf counters might help for a quick test
<jbjnr> heller: I meant that now a thread is always launched on the pool the parent came from - previously it was always pool#0 if the user didn't use a special executor
<jbjnr> so if your change was in master, it might be a suspect
<jbjnr> but I now suspect perf counters, cos they ask for pool names, nobody else needs them usually.
<jbjnr> apart from custom_pool_executor
<zbyerly_> works without the perf counters
<jbjnr> zbyerly_: if a small test can produce the error, feel free to post one.
<jbjnr> aha!
<jbjnr> ok, then please file an issue with as much detail as poss. hartmut knows what to do as he fixed the perf counters for pools
<heller> zbyerly_: what's the perf counter parameter you are using, btw?
<jbjnr> pool idle rate I'd guess
<zbyerly_> --hpx:print-counter=/arithmetics/mean@/threads{locality#*/worker-thread#*}/time/average --hpx:print-counter=/threads{locality#0/total}/idle-rate --hpx:print-counter=/arithmetics/mean@/threads{locality#0/worker-thread#*}/time/overall
<zbyerly_> i'll isolate it
<jbjnr> crap I've disabled all counters in my build so I can't test it...
<zbyerly_> it also only shows up in distributed
<jbjnr> interesting.
<jbjnr> someone asked about a parcelport citation - would my rma serialization paper be any use for that?
<jbjnr> it's parcelport related
<heller> jbjnr: ah, yes, that shouldn't affect anything
<zbyerly_> yeah it shows up with hello_world
<heller> jbjnr: the pool is not looked up by the name, each thread stores the pool it was run on
<jbjnr> exactly why I said it would not be the probable cause.
<jbjnr> I do not like the intro - Moore's law isn't over yet, so the first part is just not correct.
<zbyerly_> okay, so i figured something out
<heller> it is said to be over though
<zbyerly_> jbjnr, what else don't you like about it
<zbyerly_> --hpx:print-counter=/threads{locality#0/total}/idle-rate
<zbyerly_> i'm asking for locality #0
<zbyerly_> if i ask for locality #*
<zbyerly_> it doesn't happen
<zbyerly_> this is with two compute nodes, 2 localities
<heller> counting the green: 15/40
taeguk has joined #ste||ar
taeguk has quit [Client Quit]
<heller> counting the green: 16/40
<github> [hpx] sithhell pushed 1 new commit to master: https://git.io/v5QHc
<github> hpx/master c50c064 Thomas Heller: One more attempt to fix the service_executor...
<heller> next round
<heller> counting the green: 17/40
<heller> counting the green: 18/40
<zbyerly_> i'll pray for 19 heller
<zbyerly_> submitted an issue on that thing
<heller> counting the green: 21/40
<zao> What on earth are you fine people up to?
<heller> zao: MBGA!
<zao> Those are letters :)
<jbjnr> heller: = thread pool terrorist
<jbjnr> I'm going to admit to being slightly impressed by the green. Not bad at all. I'm almost ready to forgive you for breaking everything else
<heller> zao: Make Buildbot Green Again!
<heller> jbjnr: my pleasure ;)
<zao> :D
<zao> For the mem counters, resident and virtual map straight to RES and VIRT?
<jbjnr> the tasks that are triggered by my MPI messages are being directed to the MPI pool thanks to heller's changes, so that's why our distributed performance has dropped off a cliff.
<heller> jbjnr: shit
<heller> jbjnr: what do you suggest?
<jbjnr> this is entertaining
<jbjnr> mpi_executor
<jbjnr> pants
<jbjnr> wrong clipboard
* heller is glad you are at work and not browsing porn
<jbjnr> the green graphs is fantastic!
<heller> lol @ graph
<heller> should be an easy fix, shouldn't it?
<jbjnr> you see the 512 block single node is great, then there's a slight drop ...
<jbjnr> it's running everything on 1 thread
<heller> bummer
<heller> well
<heller> you know what to do ;)
<heller> jbjnr: revert that one ^^
<jbjnr> I will come up with a better long-term plan than just reverting it, though. I think that using the same thread pool is probably a good default choice, but maybe we can add more execution policies and suchlike, or add pool options to disable default threads.
<jbjnr> there's another paper in this multi-thread pool material ...
<heller> yup yup
<zao> Oh dear... I'm going to have to link a library (-lkvm) to implement mem_counter_bsd.cpp
<zao> This'll be fun.
Matombo has joined #ste||ar
bikineev has quit [Remote host closed the connection]
<jbjnr> heller: do we have a function that converts a mask_type into an hwloc_cpu set?
<jbjnr> our mask_type is a bitset<> or an int and there is an hwloc bitmap, but I can't seem to locate a converter
<jbjnr> nvm found a ton of hwloc bitmap functions
<jbjnr> will iterate by hand
<heller> ok
<heller> please do
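For reference, copying a plain bitmask into an hwloc bitmap by hand is only a few lines; a minimal sketch, assuming a std::bitset-style mask rather than HPX's actual mask_type (hwloc_bitmap_alloc/zero/set are standard hwloc bitmap calls; the function name is made up):

    #include <hwloc.h>
    #include <bitset>
    #include <cstddef>

    // Sketch: translate a fixed-width bitmask into a freshly allocated hwloc bitmap.
    // The caller owns the result and must release it with hwloc_bitmap_free().
    hwloc_bitmap_t mask_to_hwloc_bitmap(std::bitset<64> const& mask)
    {
        hwloc_bitmap_t bitmap = hwloc_bitmap_alloc();
        hwloc_bitmap_zero(bitmap);
        for (std::size_t i = 0; i != mask.size(); ++i)
        {
            if (mask.test(i))
                hwloc_bitmap_set(bitmap, static_cast<unsigned>(i));
        }
        return bitmap;
    }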
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 255 seconds]
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
bikineev has joined #ste||ar
K-ballo has joined #ste||ar
pree has joined #ste||ar
<zao> heller: I need to add a linker flag to a target from add_hpx_component. Can I reliably do `target_link_libraries(foo_component PRIVATE "-lmeh")`?
<heller> yes
<zao> The library only applies on particular CMAKE_SYSTEM_NAMEs.
<zao> heller: CMake got upset if I used the non-keyword form of target_link_libraries.
<zao> Do we always use the PRIVATE/PUBLIC flavor?
<zao> (I've gone and implemented hpx_memory for FreeBSD and DragonFlyBSD, and technically also NetBSD/OpenBSD if someone figures out what the kernel structure fields are named)
<heller> cool!
<heller> yes, you always have to use it
hkaiser has joined #ste||ar
<jbjnr> grrrr: "the hpx runtime system has not been initialized yet"
<zao> Heh, some example uses __argv... I could either try to slurp them out via KVM, or just disable the example.
<zao> Can I disable particular examples?
<jbjnr> not easily
<jbjnr> in CMakeLists, just # comment out the ones you dislike
<zao> #if defined(silly_os) int main() {} #endif
<zao> jbjnr: Thing is that it's conditional on the OS.
<zao> So I guess I could if() that in CMake then.
<jbjnr> yes
<zao> Gonna see how much else is b0rken now that I build more of HPX.
<jbjnr> anyone know if there is an errno to string in hpx anywhere?
<jbjnr> reusable for exceptions etc
<heller> ok, I totally broke it, why is launch_process timing out now?
<jbjnr> child task not killed
<heller> this is soo frustrating
<heller> it worked the instance before, didn't it?
<jbjnr> connection not made, child hangs, test times out. after that all subsequent tests usually fail due to a "network already in use" error
<heller> yes
mcopik_ has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
<hkaiser> jbjnr: sure there is
<jbjnr> ooh. I bet I just set ec=errno and let hpx do the rest with HPX_THROWS_IF....
<jbjnr> thanks hkaiser
<hkaiser> most welcome
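(For comparison, a generic errno-to-message sketch using only the standard library, not the HPX helper hkaiser refers to; the helper name is made up:)

    #include <cerrno>
    #include <system_error>

    // Sketch: turn an errno value into a readable exception.
    // std::generic_category() maps POSIX errno codes to their message strings.
    void throw_if_failed(int rc, char const* what)
    {
        if (rc < 0)
            throw std::system_error(errno, std::generic_category(), what);
    }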
<jbjnr> boost::outcome ...
<hkaiser> jbjnr: lol
<hkaiser> yah, that's what we're missing
<jbjnr> hkaiser: do you want to see my truly epic graph that's heller's fault
<zao> Yay, seems like in a stock build of everything, only examples/quickstart/CMakeFiles/init_globally_exe.dir/init_globally.cpp.o is silly.
<hkaiser> yes, pls
<jbjnr> Green line please :) https://pasteboard.co/GKsuXKT.png
<hkaiser> zao: how do you extract __argv/__argc on freebsd
<hkaiser> jbjnr: nice! ;)
<jbjnr> it really shows off how good we are!
<hkaiser> epic indeed
<zao> hkaiser: From a shared library? kvm_getargv.
pree has quit [Ping timeout: 252 seconds]
<hkaiser> didn't we just fix that recently?
<zao> hkaiser: Might be able to use kvm_getenvv too to grab the environment.
<zao> (regarding the freebsd_environ hacks we did in the past)
pree has joined #ste||ar
<hkaiser> jbjnr: you have the tendency to advertize your mishaps causing more trouble for yourself afterwards
<zao> Unfortunately, kvm_* introduces a dependency on -lkvm.
<zao> So makes the build slightly worse.
<zao> Und jetzt, coffee!
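A rough sketch of how pulling argv for the current process via libkvm might look on FreeBSD (kvm_openfiles, kvm_getprocs, and kvm_getargv are the documented libkvm entry points; error handling is minimal and the helper name is made up; link with -lkvm):

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <fcntl.h>
    #include <kvm.h>
    #include <limits.h>
    #include <paths.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Sketch: read the current process' argument vector through libkvm.
    std::vector<std::string> own_argv()
    {
        std::vector<std::string> result;
        char errbuf[_POSIX2_LINE_MAX];
        kvm_t* kd = kvm_openfiles(nullptr, _PATH_DEVNULL, nullptr, O_RDONLY, errbuf);
        if (!kd)
            return result;

        int cnt = 0;
        kinfo_proc* kp = kvm_getprocs(kd, KERN_PROC_PID, getpid(), &cnt);
        if (kp != nullptr && cnt == 1)
        {
            char** argv = kvm_getargv(kd, kp, 0);   // 0: no length limit
            for (; argv != nullptr && *argv != nullptr; ++argv)
                result.emplace_back(*argv);
        }
        kvm_close(kd);
        return result;
    }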
<jbjnr> hkaiser: no worries. this one is easily fixable. I'm just starting a new run now to hopefully get the "graph of our dreams" (TM)
pree has quit [Ping timeout: 240 seconds]
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/v57Yj
<github> hpx/gh-pages 88eb97d StellarBot: Updating docs
<hkaiser> jbjnr: in the end you brought down Thomas' decision to pull the plug on GB onto yourself, right ? ;)
Matombo has quit [Ping timeout: 240 seconds]
pree has joined #ste||ar
<jbjnr> hkaiser: yes, but that's not a bad thing. you do not need us and you can go on anyway. We have to work on a CSCS project not an LSU one. If you carry on with it and need my help, then I'm here, I just can't spend N months on it without reason.
<hkaiser> jbjnr: sure, I perfectly understand
<jbjnr> (getting a gpu version by april is not going to be easy)
<zao> jbjnr: We're troubleshooting RDMA at work. Is there anything in HPX one can use to put some load on it?
<jbjnr> cray or non?
<jbjnr> if cray, then yes, if not, then I need to fix the IBvers PP first
Matombo has joined #ste||ar
<hkaiser> jbjnr: fixing it would be a Good Thing (tm) anyways
<zao> Very non-Cray.
<jbjnr> yes
<zao> Alright, good to know.
<heller> jbjnr: you should focus on the good news!
<jbjnr> hkaiser: once we have the matrix work cleaned up, I'm hoping to be allwed to go back to network and do more rma stuff. Also fix verbs PP
<hkaiser> great
<jbjnr> good news?
<hkaiser> yes, good news
<hkaiser> for instance that the work we did in GB got awarded
<jbjnr> yes
<hkaiser> You should tell THAT to Thomas
<jbjnr> indeed
mcopik_ has quit [Ping timeout: 240 seconds]
parsa has joined #ste||ar
<heller> jbjnr: also that you match parsec in the single node case now
<jbjnr> ok, hwloc set mem interleaved now tested.
<jbjnr> no need to use numactl now
<jbjnr> 954 GFlop/s Thank you very much!
bikineev has joined #ste||ar
diehlpk_work has joined #ste||ar
<diehlpk_work> hkaiser, heller Do you want to do any changes for the paper?
aserio has joined #ste||ar
<hkaiser> diehlpk_work: I would like to give it a once-over today
<hkaiser> when is the deadline
<diehlpk_work> Submission Deadline: 2017 September 15, 23:59 AOE
<hkaiser> ok, so we still have today
StefanLSU has joined #ste||ar
<aserio> hkaiser: will you be calling into the STORM meeting from home?
<hkaiser> yes
<aserio> parsa: Thought you would relate... http://www.smbc-comics.com/comic/likely-apocalypse
<parsa> :)))))))))
parsa has quit [Quit: Zzzzzzzzzzzz]
eschnett has quit [Quit: eschnett]
<zbyerly_> diehlpk_work, hkaiser deadline pushed to September 29, 23:59 AOE
<hkaiser> perfect!
<diehlpk_work> As usual
StefanLSU has quit [Quit: StefanLSU]
parsa has joined #ste||ar
StefanLSU has joined #ste||ar
eschnett has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
<diehlpk_work> hkaiser, zbyerly heller What is our strategy now?
<diehlpk_work> Finish the paper by next Friday
<hkaiser> diehlpk_work: as said I would like to do a once-over before submission
<diehlpk_work> Sure, with the new deadline you do not have to do it today
AnujSharma has quit [Ping timeout: 240 seconds]
StefanLSU has joined #ste||ar
pree has quit [Ping timeout: 248 seconds]
StefanLSU has quit [Quit: StefanLSU]
StefanLSU has joined #ste||ar
StefanLSU has quit [Client Quit]
hkaiser has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
rod_t has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has joined #ste||ar
StefanLSU has joined #ste||ar
StefanLSU has quit [Client Quit]
<hkaiser> heller: the CV change merged this morning broke buildbot
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/v57oO
<github> hpx/master 2945dcd Hartmut Kaiser: Silencing MSVC warnings...
parsa has joined #ste||ar
<K-ballo> what was the warning?
bikineev has quit [Ping timeout: 260 seconds]
<hkaiser> default arguments on members defined out of class are ignored - it was a benign and actually wrongly issued one
<hkaiser> there were no default arguments, the compiler just assumed to be smart after some pattern matching went wrong
parsa has quit [Ping timeout: 255 seconds]
hkaiser has quit [Read error: Connection reset by peer]
bibek_desktop_ has joined #ste||ar
bibek_desktop has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
mcopik_ has joined #ste||ar
hkaiser has joined #ste||ar
david_pfander has quit [Ping timeout: 255 seconds]
parsa has joined #ste||ar
parsa has quit [Ping timeout: 246 seconds]
mcopik_ has quit [Ping timeout: 246 seconds]
eschnett has joined #ste||ar
pree has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
eschnett has quit [Quit: eschnett]
Guest87225 has joined #ste||ar
Guest87225 is now known as patg
bikineev has joined #ste||ar
aserio has quit [Ping timeout: 246 seconds]
eschnett has joined #ste||ar
StefanLSU has joined #ste||ar
hkaiser has quit [Ping timeout: 246 seconds]
StefanLSU has quit [Quit: StefanLSU]
parsa has joined #ste||ar
parsa has quit [Ping timeout: 246 seconds]
<github> [hpx] hkaiser opened pull request #2906: Making sure generated performance counter names are correct (master...fixing_2905) https://git.io/v55eI
aserio has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Ping timeout: 252 seconds]
<github> [hpx] K-ballo force-pushed format from 691115c to 3bac096: https://git.io/v5zUg
<github> hpx/format c3385f8 Agustin K-ballo Berge: Wrap boost::format uses in traditional (variadic) function call syntax
<github> hpx/format 3bac096 Agustin K-ballo Berge: Add inspect check for unguarded boost::format usage
<github> [hpx] hkaiser created fixing_2890 (+1 new commit): https://git.io/v55Jp
<github> hpx/fixing_2890 dfcb9c4 Hartmut Kaiser: Force-delete remaining channel items on close...
mcopik_ has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Ping timeout: 255 seconds]
bibek_desktop_ has quit [Quit: Leaving]
bikineev has quit [Ping timeout: 240 seconds]
patg has quit [Quit: This computer has gone to sleep]
bibek_desktop has joined #ste||ar
mbremer has quit [Quit: Page closed]
akheir has joined #ste||ar
parsa has joined #ste||ar
eschnett has quit [Quit: eschnett]
<akheir> Does anybody know what hpx::components::process::child::wait() is? What does it do?
parsa has quit [Ping timeout: 246 seconds]
<zao> Hmm, don't see that one in the master tree.
hkaiser has joined #ste||ar
<akheir> hkaiser: what is hpx::components::process::child::wait()? What does it do?
<aserio> K-ballo: is (void)parameter a good way of silencing unused-parameter warnings?
<hkaiser> it calls waitpid on linux
<hkaiser> aserio: yes
<aserio> hkaiser: thanks :)
<hkaiser> akheir: ^^
<akheir> hkaiser: if I use c.wait() for each process::child c created in the loop, the problem goes away
<hkaiser> akheir: ok, so when does it not work?
<akheir> hkaiser: otherwise. I wasn't sure if it is the correct way to fix the problem
<hkaiser> akheir: missing move semantics?
<akheir> hmm?
<hkaiser> hold on
<hkaiser> akheir: the correct thing to do is to call child.wait_for_exit(), not .wait() - those are different things
<akheir> I do call that on the second loop when I try to check the processes
<hkaiser> ok, so where does it hang?
<akheir> but it never terminates
<akheir> yes
<akheir> it hangs at wait_for_exit()
<hkaiser> ok
<hkaiser> can you give me access to the code?
<akheir> sure. just a sec
<hkaiser> k, will look in a sec
<akheir> with this I was able to launch 20 localities at the same time
diehlpk_work has quit [Quit: Leaving]
Matombo has quit [Read error: Connection reset by peer]
<hkaiser> akheir: if you call c.wait() everything works as expected?
<akheir> hkaiser: yes. I was able to run 20 processes at the same time
<hkaiser> ok, thanks
<hkaiser> that shouldn't be necessary - the latch should take care of exactly this
<hkaiser> ahh
<hkaiser> akheir: could you try using a different name for the latch for each child?
bikineev has joined #ste||ar
wash has joined #ste||ar
<akheir> The thing is this c.wait() is in the first loop that launches all the processes. Maybe that changes the launch schedule
<akheir> hkaiser: sure
<hkaiser> distinct latch names may help
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/v55Z5
<github> hpx/master 2f8b7b8 Hartmut Kaiser: Partially rolling back recent changes as those broke other platforms
<akheir> hkaiser: the latch name isn't for launching the process?
<akheir> what exactly does it indicate?
<hkaiser> that latch name is for synchronizing the two processes
<hkaiser> it waits for the new hpx instance to be up and running
<akheir> hkaiser: aha, I will try that and let you know
<hkaiser> akheir: thanks!
<hkaiser> that may explain things
<akheir> hkaiser: btw, all the buildbot builds failed. have seen that?
<hkaiser> yah, see ^^ for a fix
<github> [hpx] hkaiser force-pushed fixing_2890 from dfcb9c4 to d74b49a: https://git.io/v55nK
<github> hpx/fixing_2890 d74b49a Hartmut Kaiser: Force-delete remaining channel items on close...
<github> [hpx] hkaiser force-pushed fixing_2905 from 8ac0bcf to 54031fc: https://git.io/v55n1
<github> hpx/fixing_2905 54031fc Hartmut Kaiser: Making sure generated performance counter names are correct...
<akheir> hkaiser: different latch name works perfectly. no need for c.wait() and it runs much faster since there is no wait
<hkaiser> akheir: cool
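A rough sketch of the per-child latch naming that fixed this, loosely modelled on HPX's launch_process example; the register_as/connect_to/count_down calls follow that example, but the exact names and signatures here are assumptions, not checked against the current tree:

    #include <hpx/include/lcos.hpp>
    #include <cstddef>
    #include <string>

    // Parent side (sketch): one uniquely named latch per launched child,
    // so concurrent launches no longer race on a single shared name.
    void sync_with_child(std::size_t i)
    {
        hpx::lcos::latch l(2);                     // two participants: parent + child i
        std::string const name = "launch_process_latch/" + std::to_string(i);
        l.register_as(name);

        // ... launch child i here, passing `name` on its command line ...

        // The child connects and counts down once its runtime is up:
        //     hpx::lcos::latch c; c.connect_to(name); c.count_down(1);
        l.count_down_and_wait();                   // both sides arrived -> released
    }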
<zao> I wonder if I should force -lkvm on all FreeBSD binaries.
<akheir> hkaiser: I will push that and send the example to maciej
<hkaiser> would you mind sending an email to Maciej explaining what's wrong and what he needs to do in order to make his stuff work
<hkaiser> thanks!
<zao> Would simplify environ and argv shenanigans.
aserio has quit [Quit: aserio]
<hkaiser> zao: yah, thanks
<zao> How does the /runtime/memory/total perfcounter work on macOS? There's no read_total_mem_avail in that implementation file.
<zao> Rotted code?
<hkaiser> probably - nobody is looking at that code
<zao> Great, I get to find out how to query that too on my platform then.
akheir has quit [Remote host closed the connection]
patg has joined #ste||ar
patg is now known as Guest21321
<K-ballo> aserio: if it's a parameter you can just comment out its name, but otherwise yes
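Both options side by side, as a minimal sketch:

    // Option 1: comment out the parameter name (only possible for parameters).
    void on_event(int /*code*/) {}

    // Option 2: cast to void in the body (works for any unused variable too).
    void on_other_event(int code)
    {
        (void)code;   // tells the compiler the value is deliberately unused
    }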
Guest21321 has quit [Quit: This computer has gone to sleep]
Guest21321 has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
parsa has joined #ste||ar
Guest21321 has quit [Quit: This computer has gone to sleep]
Guest21321 has joined #ste||ar
Guest21321 is now known as patg
Matombo has joined #ste||ar
rod_t has quit [Quit: Textual IRC Client: www.textualapp.com]
parsa has quit [Quit: Zzzzzzzzzzzz]
quaz0r has quit [Ping timeout: 264 seconds]
quaz0r has joined #ste||ar
parsa has joined #ste||ar
zbyerly_ has quit [Ping timeout: 264 seconds]
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar