aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<github> [hpx] hkaiser closed pull request #3098: Unbreak broadcast_wait_for_2822 test (master...unbreak_test) https://git.io/vNGOB
hkaiser has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
mcopik has quit [Ping timeout: 256 seconds]
<github> [hpx] sithhell pushed 1 new commit to fix_stack_overhead: https://git.io/vNn7W
<github> hpx/fix_stack_overhead 1aee866 Thomas Heller: Factoring out thread stack config into new header
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vNnjs
<github> hpx/gh-pages 0469322 StellarBot: Updating docs
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
mcopik has joined #ste||ar
brjsp has joined #ste||ar
<brjsp> hello
<brjsp> I am working on fixing this issue
<brjsp> as I see it, the problem is with the unlock order
<brjsp> but could anyone explain the significance of “ignore_lock” to me?
<K-ballo> there's a mechanism that detects locks being held while suspending, that tells the mechanism to ignore the lock
<hkaiser> brjsp: hpx has a means of detecting whether locks are being held while a thread is suspended
<brjsp> So basically the following should fix #3068
<brjsp> util::ignore_all_while_checking ignore_lock;
<brjsp> std::unique_lock<mutex_type> l(mtx_);
<brjsp> util::unlock_guard<Lock> unlock(lock);
<brjsp> + std::lock_guard <std::unique_lock<mutex_type> > unlock_next
<brjsp> + (l, std::adopt_lock);
<brjsp> cond_.wait(l, ec);
<brjsp> @K-ballo you said earlier that because of a misunderstanding you have essentially two condition_variable_any
<brjsp> what should be done about hpx::lcos::local::condition_variable ?
<K-ballo> should be taken out and shot
<K-ballo> brjsp: in order to truly be a condition_variable, you mean?
<brjsp> yeah, it totally duplicates std::condition_variable
<K-ballo> it doesn't, the mutex type ought to be `lcos::local::mutex`, not `spinlock`
<brjsp> in _any?
<K-ballo> no, _any needs even more changes
<K-ballo> or would need
<K-ballo> std::condition_variable operates on std::mutex
<K-ballo> lcos::local::condition_variable operates on lcos::local::spinlock rather than lcos::local::mutex
<K-ballo> the queues and mechanisms that condition_variable (the non-generic one) is meant to reuse live in mutex; a spinlock does not have those
<K-ballo> the generic one, on the other hand, may just as well stick to spinlock... possibly... but it ought to use dynamic memory to get some weird corner cases right, it's detailed on the paper I linked the other day I think
<K-ballo> or maybe it was a separate paper?
<github> [hpx] brjsp opened pull request #3100: Fix #3068 (condition_variable deadlock) (master...cv_deadlock_fix) https://git.io/vNcqP
mcopik has quit [Ping timeout: 255 seconds]
<K-ballo> how about I add util/format.cpp to each benchmark that doesn't link against hpx?
<hkaiser> fine
<brjsp> @hkaiser clang-format does not run for me on ubuntu 16.04
<brjsp> YAML:26:23: error: unknown key 'AlignEscapedNewlines'
<brjsp> AlignEscapedNewlines: Right
<brjsp> ^~~~~
<zao> I believe we depend on options defined in later versions.
<hkaiser> brjsp: yah, you need a newer clang-format
<hkaiser> v5 or better, iirc
<brjsp> should be ok now
<hkaiser> brjsp: will you remove the unneeded constructor as well, please?
<K-ballo> I didn't know we were doing clang-format already
<K-ballo> I've been formatting things manually all this time!!
<hkaiser> K-ballo: informally ;)
<hkaiser> it provides a good baseline
<hkaiser> especially for people not familiar with the styles used
<hkaiser> K-ballo: and FWIW, I mostly format things manually as well
<brjsp> @hkaiser yes
<hkaiser> brjsp: thanks
<heller_> woah ... according to my measurements, we "waste" around 2500 cycles by roundtripping to the scheduler
<K-ballo> adding format.cpp is a no go, it brings in the checks for hpx/boost version and so on
<heller_> so the only thing that's holding it back are the benchmarks or other unit tests not linking to libhpx before?
TenGumis has joined #ste||ar
<K-ballo> as far as I can see it's only a bunch of performance tests
<K-ballo> 8 or 9
<K-ballo> I'll just move the implementation to headers
<K-ballo> 1.6.60
<hkaiser> K-ballo: nice catch
<hkaiser> the same might be in a different place
mcopik has joined #ste||ar
<hkaiser> no, it was just there
<github> [hpx] hkaiser force-pushed relaxed_atomic_operations from cac03f7 to d1a9e13: https://git.io/vNO03
<github> hpx/relaxed_atomic_operations d1a9e13 Hartmut Kaiser: Relax atomic operations on performance counter values
brjsp has quit [Quit: Page closed]
<github> [hpx] hkaiser closed pull request #3099: Fixing lock held during suspension in papi counter component (master...fix_papi_locking) https://git.io/vNG3b
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/vNc00
<github> hpx/master 9d57a60 Hartmut Kaiser: Merge pull request #3099 from STEllAR-GROUP/fix_papi_locking...
<TenGumis> Hello. I chose this issue as my first in this project because it has the "easy" tag ;) https://github.com/STEllAR-GROUP/hpx/issues/2980
<TenGumis> Can somebody tell me something more about that?
<TenGumis> Is victor-ludorum's comment below this issue correct?
<hkaiser> TenGumis: you should talk to khuck (not here, currently) as he 'owns' APEX
<TenGumis> via email or under the issue on github?
<hkaiser> TenGumis: as you like it
<TenGumis> Ok. Thank you.
<hkaiser> github might be the better option as everybody can follow, but it's your call
ct-clmsn has joined #ste||ar
TenGumis has quit [Ping timeout: 260 seconds]
Vir has quit [Ping timeout: 276 seconds]
Vir has joined #ste||ar
Smasher has quit [Remote host closed the connection]