aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk has quit [Ping timeout: 260 seconds]
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
diehlpk has joined #ste||ar
mcopik has quit [Ping timeout: 268 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
K-ballo has quit [Read error: Connection reset by peer]
parsa has joined #ste||ar
eschnett has quit [Quit: eschnett]
K-ballo has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
eschnett has joined #ste||ar
diehlpk has quit [Ping timeout: 256 seconds]
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
<Antrix[m]> hkaiser: Sorry, I was asleep while you gave me this hint. What are OS-threads ? In the Hello World example, hello_world_worker prints output only if current worker_thread_num == desired. What is the significance of this ?
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 265 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
jaafar has quit [Ping timeout: 264 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
CaptainRubik has joined #ste||ar
<CaptainRubik> Antrix[m] : Watch this https://www.youtube.com/watch?v=NToOo-T3Q3w . Very good resource for learning about hpx.
<simbergm> heller_: good morning
<heller_> simbergm: good morning
<simbergm> so for thread suspension, you think it'd be enough for allscale to have a suspend(thread_num, wait_for_queues=false)?
<simbergm> why do you think there is work scheduled on suspended threads?
<heller_> simbergm: because the schedule_thread functions don't check for disabled PUs
<simbergm> aha, good point
<simbergm> do they get called from anywhere but the scheduling_loop though?
<simbergm> (should be changed in any case)
<simbergm> and with num_thread = -1
<heller_> i think they are called when setting the thread state, yes
<simbergm> yep, they are
<simbergm> okay, nice
<simbergm> I'll change that
<simbergm> but you'd also like to not wait for empty queues?
<Antrix[m]> CaptainRubik: will watch this video.
parsa has quit [Quit: Zzzzzzzzzzzz]
<jbjnr> CaptainRubik: just in case you were not aware - the Biddiscombe in the video is me, and the other chap in the videos is heller_ . Hopefully, you knew that ....
<jbjnr> and Antrix[m] ^^
parsa has joined #ste||ar
david_pfander has joined #ste||ar
<heller_> simbergm: I tried, doesn't make a difference
Anushi1998 has quit [Ping timeout: 256 seconds]
parsa has quit [Quit: *yawn*]
anushi has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 265 seconds]
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vA4tW
<github> hpx/gh-pages fc13e56 StellarBot: Updating docs
<CaptainRubik> @jbjnr : Yes, I know. :)
<jbjnr> CaptainRubik: So how would you feel about beginning an integration of libcds into hpx as a gsoc project instead of the more general one proposed of concurrent data structures?
<jbjnr> in some ways easier - but in others more tricky.
<CaptainRubik> yeah, tricky in the sense that I won't have the flexibility, right?
<CaptainRubik> I am reading the mail you sent me earlier
<jbjnr> correct. There will be many small data types and things used in libcds that you will need to replace with the hpx equivalents
<jbjnr> once those are in place, then 'in theory' the rest of libcds should slot into place
<jbjnr> of course, it will be a lot more work than that
<CaptainRubik> I would like to take this task. So should I also include time for understanding libcds in the proposal or is it expected to be done before submitting a proposal.
<CaptainRubik> Forgive my ignorance doing GSoC for the first time.
<CaptainRubik> Rest of work means- Documentation and testing right? Along with other changes to be made for running the code.
<jbjnr> well, ideally before, but in reality we can't enforce that. The way it works is - as the gsoc deadline for submissions approaches - we get more and more interested students asking about projects, each will have to write a proposal and we rank them and pick the ones we think are best. If nobody else wants this project, then your chances are better - BUT we may not get many slots this year - we only asked for a few, and so we pick the N
<jbjnr> best students from all projects. So the more background research and preparation you do before writing your proposal, the better it will be, and the more likely you are to get picked.
<jbjnr> The ball is (as they say) 'in your court'
<jbjnr> yes. documentation and testing.
<jbjnr> testing is not easy for concurrent code. Races are generally random and hard to reproduce, so any testing infrastructure that libcds has should be preserved and integrated with hpx testing too.
<CaptainRubik> Alright then I will try to dive into libcds code and understand the workings.
<jbjnr> CaptainRubik: great. We are here to help, but ultimately, for gsoc - you will be doing the work - so it's better if you understand the project, know what you're getting into and are enthusiastic about it - if you discover as much as you can beforehand - there's a better chance that you'll pick a project you like, do well and make everyone happy :)
<jbjnr> (and if you don't like the idea of working on libcds, then you've got time to look for something else).
<jbjnr> BUT - multithreading experience looks very good on your CV, so when you are looking for a job, a big PR on the HPX project will help lots!
<CaptainRubik> I think my proposal would be a mixture of libcds code and some papers that I have looked into. I will do more research and contact again. Thanks. I will do my very best. :)
<CaptainRubik> One more thing. What time zone are you guys in?
<jbjnr> ok. Still plenty of time, I can help with ideas for the proposal, but spend a bit of time reading things and putting down ideas, then send a draft outline of the proposal, and I can comment on it.
<jbjnr> Switzerland - CET
<jbjnr> right now it is 11:37am
<CaptainRubik> Will do that. Thanks.
<jbjnr> later when the USA wakes up, hkaiser and others will join, but heller, myself, simberg, zao, etc we are in europe on that time zone
<CaptainRubik> Thanks.
quaz0r has quit [Quit: WeeChat 2.1-dev]
CaptainRubik has quit [Quit: Page closed]
quaz0r has joined #ste||ar
<anushi> Sorry for being late, but now I extended my RAM to 8 GB and only 41% of tests passed when running make test. Is it ok?
<heller_> no
<anushi> heller_, what should I do then, any suggestions would be helpful :)
<heller_> 1) what tests fail 2) what are the errors you observe 3) what is your configuration?
K-ballo has joined #ste||ar
<github> [hpx] msimberg closed pull request #3176: Fixed Documentation for "using_hpx_pkgconfig" (master...Fix-documentation) https://git.io/vACLr
<github> [hpx] msimberg closed pull request #3177: Removed allgather (master...remove_outdated_example) https://git.io/vAWgB
mcopik has joined #ste||ar
hkaiser has joined #ste||ar
nanashi55 has quit [Ping timeout: 276 seconds]
nanashi55 has joined #ste||ar
<Antrix[m]> jbjnr: What is the use of a promise? It is pretty much just a variable which is not set yet. Some future might want to get its value. Can we not do this using a simple variable which is exposed to all the localities?
<Antrix[m]> hpx::lcos::(local::)promise is what I am talking about
<heller_> how would you signal completion?
<Antrix[m]> heller_: Oh I see, are is_ready, has_value and has_exception defined for promise ?
<heller_> promise is the "sending" part, future the receiving part
<Antrix[m]> We could use a default value which means that the variable is not set yet?
<heller_> the only use of a promise is to set the value/exception
<heller_> the only use of a future is to get the value/exception
<Antrix[m]> Ok
<heller_> while a future might block until the value is ready, or have a continuation attached whenever the value got set
<Antrix[m]> future goes to a suspended state if promise not ready yet ?
<heller_> you can think of this shared state being something like a variant<Value, Exception>, an additional status enum and a condition variable on which you can block on
<heller_> yes
<Antrix[m]> Oh nice! Thanks. I am looking at your video right now btw: HPX intro part 2 ..
<jbjnr> yes Antrix[m] - sounds like you got it now. The cool thing about a promise is that you can create one in thread A, extract a future from it, give it to some function on thread B, and then thread B can pass it around to anyone and when someone calls .get() on it - they get whatever thread A has put into it (or block if thread A hasn't done it yet).
<jbjnr> (I meant, pass the future around - not the promise)
Anushi1998 has joined #ste||ar
<Antrix[m]> jbjnr: I get it now. It gives way to a light weight dependency/synchrony for a future. We need not define a new future, just a promise. Is that correct ?
<Anushi1998> heller_: These test fail https://pastebin.com/d9jBUsCb
<hkaiser> Antrix[m]: why do you think you need a new promise
<hkaiser> ?
<hkaiser> there are several different 'asynchronous providers' in HPX, all can be used to produce a future
<Anushi1998> Also while running make -j5 there are no errors
<jbjnr> the promise/future really need each other, you get the future from the promise, but they can be separated - that's what gives the threads a way to synchronize. What heller said above. Promise to set, future to get.
<heller_> Anushi1998: can you also paste LastTest.log somewhere, please?
<jbjnr> heller_: views on libcds?
<heller_> what are views on libcds?
<heller_> jbjnr: excellent progress though!
<jbjnr> I mean do you have any opinion on the merits of integrating libcds
<jbjnr> ^ ok. I see
<heller_> high merit
<jbjnr> lol
<hkaiser> jbjnr: I'm all for it
<jbjnr> good day hkaiser didn't realize you were up.
<hkaiser> the only caveat would be that this requires long-term commitment
<hkaiser> jbjnr: g'morning
<heller_> looks like the original libcds author is up for it
<hkaiser> just dropping stuff into the repo is not a solution
<jbjnr> yes. but if we start using libcds::lockfree::stack instead of boost::lockfree::stack etc etc (names made up), then we can provide that kind of commitment
<hkaiser> ok
<jbjnr> concurrent maps, stacks, all sort of goodies in there.
<Antrix[m]> jbjnr: Oh Ok. I get it now.
<heller_> if I interpret the emails correctly, he'd be very happy to have it used inside of HPX
<jbjnr> that's how it looks
<jbjnr> I was not expecting a "yes" so soon!
apsknight has joined #ste||ar
<heller_> not sure if he'd abandon libcds 'proper'
<jbjnr> I assumed we'd have to beg a bit more etc etc
<jbjnr> no libcds won't go anywhere, but if we could absorb it into a subrepo of hpx, it'd be awesome
<hkaiser> nod
<heller_> well, libcds as an implementation of hazard pointers etc. but with HPX, would fit our scheme nicely
<Anushi1998> heller_ : Sure, it is exceeding the paste limit so I have shared it on drive, https://drive.google.com/open?id=1GeaI94aAUCYdw50UO7YP83jU3T-FQX2j
<jbjnr> (fork libcds, hpxify the bits that need it, make it a submodule and then merge stuff across any time good fixes or releases are made into our hpx-fork)
<jbjnr> gtg bbiab
<heller_> Anushi1998: all your tests fail because you haven't built them
<heller_> Anushi1998: type "make tests"
<Anushi1998> heller_ : Thanks:)
anushi has quit [Ping timeout: 265 seconds]
anushi has joined #ste||ar
Anushi1998 has quit [Quit: Leaving]
diehlpk_work has joined #ste||ar
hkaiser has quit [Quit: bye]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
hkaiser[[m]] has joined #ste||ar
<jbjnr> hkaiser[m]: heller_ has anyone looked into this http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0668r1.html regarding the consequences for using HPX on Power architectures?
hkaiser[m] has quit [Ping timeout: 248 seconds]
<heller_> HPX works fine on power
<jbjnr> you think
<heller_> no complaint from IBM
<heller_> but yeah .... I think it works
<jbjnr> we have stuff that is failing on powerpc with races (not an hpx code) but has run fine on other architectures and finding this document now makes me suspicious
<heller_> could be
<heller_> x86 has pretty strong guarantees. I think almost all instructions are atomic to begin with
<heller_> power is a lot weaker there
<jbjnr> hence my question
<heller_> where you really have to ensure you got the right orderings
<jbjnr> this document is telling us that it doesn't matter - it says the c++ model is actually broken - even with the right ordering on powerpc
<heller_> my understanding is, if everything is memory_order_seq_cst, it should be good, yet more expensive than it could be
<heller_> yeah
<heller_> jbjnr: so, quick thing to check. does the code use "volatile" when it really meant to use atomic?
<jbjnr> no.
<heller_> you still see that from time to time ...
<heller_> jbjnr: "Code that consistently accesses each location only with memory_order_seq_cst, or only with weaker ordering, is not affected, and works correctly."
<jbjnr> good.
<heller_> "Whether or not such code occurs in practice depends on coding style. One reasonable coding style is to initially use only seq_cst operations, and then selectively weaken those that are performance critical; it does result in such cases. Even in such cases, it seems clear that the current compilation strategy does not result in frequent failures; this problem was discovered through careful theoretical analysis, not bug reports. It is unclear whether
<heller_> there is any real code that can fail as a result of the current mapping; it would require careful analysis of the use cases to determine whether the weaker ordering provided by the hardware is in fact sufficient for these use cases."
aserio has joined #ste||ar
<aserio> hkaiser[[m]]: Will you be joining the call from home
hkaiser[[m]] has quit [Ping timeout: 240 seconds]
aserio1 has joined #ste||ar
hkaiser[[m]] has joined #ste||ar
aserio has quit [Ping timeout: 252 seconds]
aserio1 is now known as aserio
hkaiser[[m]] has quit [Remote host closed the connection]
hkaiser[[m]] has joined #ste||ar
anushi has quit [Ping timeout: 255 seconds]
apsknight has quit [Quit: apsknight]
aserio has quit [Ping timeout: 276 seconds]
david_pfander1 has joined #ste||ar
<simbergm> for you to try out
<simbergm> it works but but I'm not terribly happy with it
<jbjnr> heller_: if you have a branch for the wait_or_add_new stuff I can play with, please let me know. I'm going to do some profiling tonight/tomorrow
Anushi1998 has joined #ste||ar
<jbjnr> and Id like to experiment.
Anushi1998 has quit [Client Quit]
anushi has joined #ste||ar
<simbergm> I was planning on removing the throttling scheduler but then I saw #2640. It seems IBM is on the allscale project so is the throttling scheduler really not needed anymore or are they in fact using it for something not allscale-related?
Anushi1998 has joined #ste||ar
<simbergm> probably heller_ ^
<diehlpk_work> simbergm, Could you answer this student, who is itnerested in Create Generic Histogram Performance Counter
<jbjnr> #2640 was merged a year ago... why the concern about it now?
jaafar has joined #ste||ar
<simbergm> jbjnr: just wasn't sure if IBM there is allscale or if someone else is using the throttling scheduler in which case I wouldn't want to remove it just like that.
<simbergm> diehlpk_work: yep, I'll do that
<jbjnr> k
<diehlpk_work> Thanks
Anushi1998 has quit [Quit: Leaving]
diehlpk has joined #ste||ar
<diehlpk> Would anyone mind when we change the default behavior for using all threads for the debug mode?
<diehlpk> I would prefer to use only one there.
diehlpk has quit [Quit: Leaving]
hkaiser[[m]] has quit [Ping timeout: 276 seconds]
hkaiser[[m]] has joined #ste||ar
<jbjnr> we would mind
<diehlpk_work> Ok, why?
<diehlpk_work> Debugging your application on your local machine is horrible. Each time you forget to specify -t, all threads are used and your machine freezes
<jbjnr> can you imagine the hilarity and confusion of newbies using hpx getting different results all the time and not knowing why when they switch between debug/release
hkaiser has joined #ste||ar
<jbjnr> my machine does not freeze!
aserio has joined #ste||ar
hkaiser[m] has joined #ste||ar
david_pfander1 has quit [Ping timeout: 268 seconds]
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
hkaiser[[m]] has quit [Ping timeout: 276 seconds]
<K-ballo> uh what? all threads in debug, one in release?
<diehlpk_work> No, one in debug and all in release
<K-ballo> oh well, just regular crazy then
<K-ballo> why would the machine freeze?
hkaiser[[m]] has joined #ste||ar
hkaiser[m] has quit [Ping timeout: 256 seconds]
<diehlpk_work> When I start my application using all threads, my complete window mamager freezes
<diehlpk_work> But we can keep it as it is
<diehlpk_work> Just wanted to ask for the opinion of others
CaptainRubik has joined #ste||ar
<simbergm> what should bulk_then_execute return? it seems to return a future<void> but I would expect a vector like bulk_async_execute
<simbergm> a vector of futures that is
<hkaiser> simbergm: yah
<hkaiser> does it return a future<void> now?
<Antrix[m]> hkaiser: I saw the introductory video of hpx and also read the examples documentation. I am unable to find a start for the python wrapper. Some help there ?
<hkaiser> the scripting repository I linked to the other day?
<hkaiser> Antrix[m]: ^
<Antrix[m]> hkaiser: That was in lua, right ? There is no code for python yet
Smasher has joined #ste||ar
<hkaiser> correct
<Antrix[m]> hkaiser: I have never tried lua :P
<hkaiser> it's a fun little language
<hkaiser> Antrix[m]: I think python supports futures nowadays, so exposing the low level hpx api that allows to asynchronously spawn hpx threads and synchronize using the returned future might be the first step
<Antrix[m]> I will try looking at the code for lua and try fiddling with pybind11
<Antrix[m]> hkaiser: we dont need pybind11 for this ?
<hkaiser> Antrix[m]: why not?
<Antrix[m]> just asking
<hkaiser> pybind11 allows to easily expose c++ functionality in python, so it will be needed
<aserio> heller_: yt?
aserio has quit [Quit: aserio]
<Antrix[m]> hkaiser: Ok, I will have to read the pybind11 documentation then.
<K-ballo> Antrix[m]: where are you from?
<Antrix[m]> K-ballo: I am from India
diehlpk has joined #ste||ar
<hkaiser> Antrix[m]: you will have to my friend ;)
david_pfander has quit [Ping timeout: 276 seconds]
aserio has joined #ste||ar
<zao> [1439/1439] Linking CXX executable bin/partitioned_vector_transform_binary_test
<zao> real 29m24.027s
* zao hugs this machine
<hkaiser> ROFL
<zao> (time is for full base + tests, not just that single test)
<zao> (in case someone wondered :D)
* K-ballo was..
* hkaiser assumed it was just this test :/
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
<heller_> aserio: what's up?
aserio has quit [Ping timeout: 276 seconds]
<zao> Time to see if my Ryzen explodes again. This thread is saddening - https://bugzilla.kernel.org/show_bug.cgi?id=196683
aserio has joined #ste||ar
bibek has joined #ste||ar
<heller_> aserio: what up?
<aserio> heller_: I was following up with you about the changes we made to checkpoint
<heller_> ok
<aserio> How was your Birthday?
<heller_> good
<heller_> day at the museum, cinema at night
<aserio> that sounds lovely :)
<heller_> yup
<heller_> the checkpointing now only preps the first layer, right?
<heller_> that is, if I checkpoint an object, which has a client as a member, it will blow up, I guess
<heller_> serialization is a bitch ;)
diehlpk has quit [Ping timeout: 276 seconds]
<aserio> heller_: shouldn't we leave the decision of what to do in that case to the user?
<aserio> Currently the user is responsible for providing a serialization function for all objects passed to checkpoint
<heller_> how?
<aserio> *all classes
<aserio> The user has to tell checkpoint how to serialize the class
<heller_> your example/test doesn't show this
<CaptainRubik> Hi, Do we have hazard pointers implemented in HPX? AFAIK (from the video tutorials on HPX) it uses boost style shared_ptr/unique_ptr which maintain ref counts. Am I right? How is garbage collection handled currently?
<K-ballo> which garbage collection?
<CaptainRubik> I mean internally in hpx if any gc is required
<K-ballo> there's more than one way to interpret that question
<heller_> aserio: that's only the top level function, no? the members are "just" serialized then, aren't they?
<K-ballo> there is no implementation of hazard pointers in HPX
<aserio> heller_: but if I create a class which contains a client, I will have to provide a serialization function for the class
<CaptainRubik> ok so the first task for the concurrent data structures project should be to implement one. Since most containers that I looked into use hazard pointers.
<K-ballo> oh, I see where that was coming from now
<CaptainRubik> Sorry I should have provided the context :P
<heller_> aserio: consider: std::vector<client> cs; /*fill cs*/ save_checkpoint(archive, cs);
<heller_> aserio: this would lead to the individual clients not going through the prep phase.
<heller_> unless I miss something
<aserio> No you are correct. I think atm serialization would throw an error
<heller_> aserio: the biggest underlying problem is that parcel serialization and checkpoint serialization are too conflated
<heller_> but not really compatible
<heller_> with the biggest problem being the GID splitting
<heller_> or well, and shallow vs. deep serialization of a client
<heller_> you try to give the same operation two different meanings
<heller_> brb
<K-ballo> remember circle ci 2.0?
<heller_> yes
<heller_> it fails compiling the tests with partitioned vector
CaptainRubik has quit [Ping timeout: 260 seconds]
<K-ballo> really? why? too long?
<heller_> K-ballo: too much memory consumption
<K-ballo> there's only one solution to that
<heller_> Which is?
<K-ballo> skip them
<aserio> heller_: if you try to send a vector of client over the wire what will happen
<Antrix[m]> hkaiser: Does hpx_init.cpp/.hpp represent the hpx::init function call?
<Antrix[m]> I have gotten a fair idea of pybind11. I want to start with trying to port functions such that I can write fibonacci example in python
<Antrix[m]> But the code base is confusing to look at. I have never contributed to orgs before
<Antrix[m]> Oh got it hpx/hpx_init_impl.hpp has hpx namespace with init defs
hkaiser has quit [Quit: bye]
hkaiser[[m]] has quit [Read error: Connection reset by peer]
hkaiser[[m]] has joined #ste||ar
<K-ballo> Antrix[m]: in your place, I'd start by exposing mock hpx facilities
<K-ballo> like future and promise, for instance, I'd implement some dummy ones with the same interfaces HPX has and export those
<Antrix[m]> K-ballo: Please elaborate ? What dummy ones are you talking about ?
hkaiser[[m]] has quit [Ping timeout: 256 seconds]
<Antrix[m]> K-ballo: sorry, I am a bit confused
<zao> So make some mock future/executor/whatever types that don't really call into C++ at all?
<K-ballo> no, something that calls into C++, but rather than calling actual HPX to call some dummy implementation
<Antrix[m]> So, for example I define some hpx::mock_add(int i, int j) which adds i and j. Port mock_add to python using pybind11 ?
<K-ballo> nevermind, I don't know how to explain it, I take it back
<K-ballo> Antrix[m]: which of the fibonacci examples are you looking at?
<Antrix[m]> fibonacci.cpp
<Antrix[m]> the very basic one
hkaiser[[m]] has joined #ste||ar
hkaiser has joined #ste||ar
apsknight has joined #ste||ar
<hkaiser> heller_: I think the problems you mention to aserio are caused by us conflating serialization with id-splitting
<hkaiser> heller_: those are not issues with checkpointing
aserio has quit [Quit: aserio]
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
<zao> 164: 01.10000.01.100000.001111111111111111111.1000.01.100000.001.10000.00...............0000.00../tree/hpx/tests/unit/component/migrate_component.cpp(298): test 't1.get_data() == 42' failed in function 'auto test_migrate_busy_component2(hpx::id_type, hpx::id_type)::(anonymous class)::operator()() const': '0' != '42'
<zao> I need to set up individual running of tests, the "let's just timeout" ones are annoying.
apsknight has left #ste||ar [#ste||ar]
apsknight has joined #ste||ar
apsknight has quit [Quit: apsknight]
Smasher has quit [Remote host closed the connection]
<K-ballo> `struct first_argument` ?!?? why is that a thing?
<K-ballo> reflecting on "signatures" is never the right thing to do
mcopik has quit [Ping timeout: 248 seconds]
<hkaiser> zao: uhh, ohh - heller applied a 'fix' to migration just the other day
<zao> 6/39 runs, thus far.
<zao> Also seeing a bunch of:
<zao> log-test-rwdi-35bd4d0599906e1e42717f07894e6bedec0cd5b9-1519063214.log:430: /tree/hpx/tests/unit/parallel/segmented_algorithms/partitioned_vector_handle_values.cpp(56): test '*it1 == *it2' failed in function 'void compare_vectors(const Vector &, const Vector &, boo
<zao> l) [Vector = std::vector<int, std::allocator<int> >]': '60' != '42'
<github> [hpx] K-ballo force-pushed fmtlib from 97ac1ae to 48b6cbf: https://git.io/vFHIw
<github> hpx/fmtlib 48b6cbf Agustin K-ballo Berge: (draft) Replace boost::format with custom sprintf-based implementation
<zao> This is commit... 35bd4d0599906e1e42717f07894e6bedec0cd5b9
<hkaiser> not good - so something was screwed up :/
<github> [hpx] K-ballo created unlock_guard (+1 new commit): https://git.io/vABz3
<github> hpx/unlock_guard 10d48f3 Agustin K-ballo Berge: Remove unused scoped_unlock, unlock_guard_try
<zao> hkaiser: Note that these runs end up timed out at 70s (my limit).
<zao> I _think_ it's timing out after this incident, but the only evidence I have to that is that it took a while from the error was printed in my log to that the timeout was reached and the next test ran.
<hkaiser> k
<hkaiser> K-ballo: don't we still use scoped_unlock?
<hkaiser> ahh, you renamed it
<K-ballo> kinda, no.. I replace those cases with unlock_guard
<hkaiser> what's the difference?
<K-ballo> apparently too many years ago I introduced unlock_guard without removing scoped_unlock, I suppose for backwards compatibility?
<hkaiser> ok
<K-ballo> I'm not entirely sure, it's from my very early days
<K-ballo> they were identical other than by name, and they share copyright
<hkaiser> interesting
<hkaiser> that would have been a riddle for future archeologists ;-)
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]