aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
diehlpk_mobile has quit [Ping timeout: 246 seconds]
diehlpk_mobile has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Ping timeout: 246 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
parsa has joined #ste||ar
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has quit [Ping timeout: 260 seconds]
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk has joined #ste||ar
anushi has quit [Ping timeout: 240 seconds]
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile2 has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile2 has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
jaafar has quit [Ping timeout: 240 seconds]
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
EverYoung has joined #ste||ar
anushi has joined #ste||ar
<anushi> hkaiser: Can you please help me figure out why one case fails http://cdash.cscs.ch/index.php?project=hpx&date=2018-03-12&filtercount=1&field1=buildname/string&compare1=63&value1=3204-master
diehlpk_mobile has quit [Read error: Connection reset by peer]
EverYoung has quit [Ping timeout: 240 seconds]
<anushi> The only thing I could find is that it exited with code 1, but not which file has the error
diehlpk_mobile has joined #ste||ar
<hkaiser> anushi: looks unrelated to your stuff
<anushi> One test is failing in my PR #3204
jaafar has joined #ste||ar
<hkaiser> yah, as I said, that looks unrelated
<anushi> Okay
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Client Quit]
jaafar has quit [Ping timeout: 246 seconds]
K-ballo has quit [Ping timeout: 248 seconds]
K-ballo has joined #ste||ar
diehlpk has quit [Ping timeout: 248 seconds]
jaafar has joined #ste||ar
jaafar has quit [Ping timeout: 246 seconds]
hkaiser has quit [Quit: bye]
parsa has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
anushi_ has joined #ste||ar
anushi_ has quit [Client Quit]
anushi has quit [Ping timeout: 240 seconds]
Anushi1998 is now known as anushi
Anushi1998 has joined #ste||ar
pdales has quit [Ping timeout: 260 seconds]
Anushi1998 has quit [Ping timeout: 240 seconds]
pdales has joined #ste||ar
jbjnr has quit [Read error: Connection reset by peer]
jbjnr has joined #ste||ar
anushi has quit [Ping timeout: 245 seconds]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
pdales has left #ste||ar [#ste||ar]
nanashi55 has quit [Ping timeout: 248 seconds]
nanashi55 has joined #ste||ar
verganz has quit [Ping timeout: 260 seconds]
wash has quit [Ping timeout: 268 seconds]
wash has joined #ste||ar
anushi has joined #ste||ar
jaafar has joined #ste||ar
<heller_> jbjnr: ok, wait_or_add_new it is!
<heller_> jbjnr: just profiling my application as well ... and guess what ... wait_or_add_new!
<jbjnr> heller_: just got it. Going to work on that today. Got time for a 5 min chat before I start?
<heller_> sure
<jbjnr> google hangouts?
<heller_> whatever works for you
<heller_> give me 5 minutes please
<jbjnr> NP
sharonhsl has joined #ste||ar
sharonhsl has left #ste||ar [#ste||ar]
<heller_> jbjnr: alright
jaafar has quit [Ping timeout: 264 seconds]
david_pfander has joined #ste||ar
hkaiser has joined #ste||ar
<github> [hpx] sithhell created fix_libfabric_pp (+1 new commit): https://git.io/vxJMY
<github> hpx/fix_libfabric_pp 94fca3d Thomas Heller: Fixing Libfabric Parcelport...
<github> [hpx] sithhell opened pull request #3235: Fixing Libfabric Parcelport (master...fix_libfabric_pp) https://git.io/vxJM3
<github> [hpx] hkaiser closed pull request #3204: Changing std::rand() to a better inbuilt PRNG generator. (master...master) https://git.io/vAPXG
verganz has joined #ste||ar
victor_ludorum has joined #ste||ar
<jbjnr> heller_: I misunderstood something ...
<heller_> jbjnr: what?
<jbjnr> if we get rid of task_items and only have work_items - how do we tell the difference between tasks we can run and tasks that are not ready yet
<heller_> we don't and we don't have to
<heller_> well
<jbjnr> ok
<heller_> in other words
<heller_> the tasks that are not ready yet, are just not scheduled right away
<jbjnr> we just use set_thread_state ... to schedule things
<heller_> yes
<heller_> that's already in place though
<jbjnr> ok
<hkaiser> heller_: have you seen my comments on #3226?
<jbjnr> hkaiser: you're up early. good day
<hkaiser> hey
<heller_> hkaiser: hmmm
<heller_> hkaiser: components::deleter should dispatch to the correct function
<hkaiser> sure, for all registered components
<hkaiser> the problem is that that component is not registered
<heller_> this should really give a different error than a segfault, I guess
<heller_> AFAICS, there's only a problem with memory_block
<heller_> yeah ...
<heller_> all other components are properly registered, or are not managed within the primary namespace
CaptainRubik has joined #ste||ar
<jbjnr> hkaiser: has anyone ever gotten anything useful out of perf counters like HPX_HAVE_THREAD_QUEUE_WAITTIME
<jbjnr> &friends
<hkaiser> Pat?
<jbjnr> I would like to remove them. they just complicate the code for no gain
<jbjnr> really?
<jbjnr> I also don't like min_tasks_to_steal_pending &friends
<hkaiser> jbjnr: those are disabled by default, aren't they
<jbjnr> well. IMHO the decision about whether to steal or not should be made in the scheduler and those checks should not be in the thread_queue
<jbjnr> I'll work around it.
<hkaiser> jbjnr: I have no objections to streamlining this - those numbers are arbitrary at best anyways
<jbjnr> hkaiser: or heller_ what does "running" mean here. https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/runtime/threads/policies/local_priority_queue_scheduler.hpp#L538 it is passed to the thread queue as the enable_stealing flag. Seems like it's misnamed or misused
<jbjnr> (lines 584 and 593 it is used as the second param)
<hkaiser> don't remember :/
<hkaiser> could be a left-over
<simbergm> if running is false it should stop stealing (may be used for other purposes as well)
<simbergm> for example when suspending it gets set to false so that it doesn't keep taking work from other threads
<hkaiser> ahh yes
<jbjnr> simbergm: thanks
<jbjnr> the force is strong with you
victor_ludorum has quit [Quit: Page closed]
nikunj_ has joined #ste||ar
<github> [hpx] hkaiser pushed 1 new commit to fixing_3226: https://git.io/vxJbb
<github> hpx/fixing_3226 bc77492 Hartmut Kaiser: Adding missing component registry for memory_block
<github> [hpx] hkaiser opened pull request #3236: Fixing memory_block (master...fixing_3226) https://git.io/vxJbj
EverYoung has joined #ste||ar
K-ballo has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
nikunj_ has quit [Quit: Page closed]
<jbjnr> heller_: or simbergm at the end of wait_or_add_new we do a call to "bool canexit = cleanup_terminated(true);" - is this something essential we must keep?
<jbjnr> (or is it checked somewhere else anyway)
<simbergm> jbjnr: I don't remember 100% but I would say it can be removed (with minor changes)
<simbergm> scheduling_loop.hpp:675 checks the result but then calls cleanup_terminated again, would be cleaner to separate the two
<simbergm> so that wait_or_add_new returns true if there is no more work to be done (and ignores terminated threads)
<simbergm> but then you're removing wait_or_add_new so...
<jbjnr> exactly
<simbergm> so scheduling_loop.hpp:675 could probably do completely without the if (wait_or_add_new...) and it looks like it would do the right thing
<hkaiser> \o/ removing code is fun!
<simbergm> then get_next_thread would only return false if there is no more work to do, so no need to check (almost) anything after that, just make sure terminated threads are cleaned up
<simbergm> yep, it sure is
<hkaiser> jbjnr: I might be able to come to your workshop in June
<jbjnr> \o/
<jbjnr> we arranged it to be right after the C++ meeting, to encourage people like yourself
<hkaiser> you'll need to give me more information what's expected of me
<hkaiser> yah, that helps
<jbjnr> Chat on Friday if you are free
<hkaiser> ok
simbergm has quit [Ping timeout: 252 seconds]
CaptainRubik has quit [Quit: Page closed]
Anushi1998 has joined #ste||ar
<Anushi1998> Why is serialization added? When are we transmitting objects from one locality to another?
simbergm has joined #ste||ar
<hkaiser> serialization of a type is needed to be able to pass an instance of that type to an action (or to return it from one)
<zao> This is your friendly reminder that the Indian term "a doubt" would be "a question" in the western world. ;)
<hkaiser> zao: heh
<K-ballo> this example seems to only operate within one locality, but it does invoke an action remotely on itself
<K-ballo> looks like it might just be a bad example...
<hkaiser> Anushi1998: we don't know at compile time whether a particular action will be invoked remotely or not, so we have to assume that it will be
<hkaiser> K-ballo: not surprising
<Anushi1998> hkaiser: Okay, thanks :)
<Anushi1998> zao: Sure! I will keep that in mind :)
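A minimal sketch of the point hkaiser makes above, assuming HPX's intrusive serialization and a plain action; the struct point, the function make_point and the action name are invented for illustration, and exact header paths may vary between HPX versions:

    // A type must be serializable to (potentially) cross a locality boundary
    // as an action argument or return value.
    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <hpx/include/actions.hpp>
    #include <hpx/include/serialization.hpp>

    struct point
    {
        double x, y;

        // intrusive HPX serialization: archive every member that travels
        template <typename Archive>
        void serialize(Archive& ar, unsigned /* version */)
        {
            ar & x & y;
        }
    };

    point make_point() { return point{1.0, 2.0}; }
    HPX_PLAIN_ACTION(make_point, make_point_action);

    int main()
    {
        // even when the target is the calling locality, the action
        // machinery still requires point to be serializable
        point p = make_point_action()(hpx::find_here());
        (void) p;
        return 0;
    }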
jakub_golinowski has joined #ste||ar
jakub_golinowski has quit [Client Quit]
mcopik has joined #ste||ar
<github> [hpx] sithhell created pipeline_example (+1 new commit): https://git.io/vxUk4
<github> hpx/pipeline_example 054103f Thomas Heller: Adding Pipeline example...
<github> [hpx] sithhell opened pull request #3237: Adding Pipeline example (master...pipeline_example) https://git.io/vxUkz
mcopik has quit [Ping timeout: 256 seconds]
mcopik_ has joined #ste||ar
<github> [hpx] biddisco created remove-schedulers (+3 new commits): https://git.io/vxULN
<github> hpx/remove-schedulers 954a5ca Mikael Simberg: Remove hierarchy, periodic priority and throttling schedulers
<github> hpx/remove-schedulers ed96973 Mikael Simberg: Clean up documentation for using schedulers
<github> hpx/remove-schedulers 0048169 Mikael Simberg: Remove ensure_hwloc function (leftover from making hwloc compulsory)
<K-ballo> woa, so much removing
<jbjnr> simbergm: I have rebased your remove_schedulers branch onto latest master and removed the conflict. aha. I pushed to stellar instead of your repo.
<jbjnr> fixed
<jbjnr> I want to kill off these schedulers asap, they make my cleanup harder!
parsa has joined #ste||ar
<K-ballo> what was the scheduler that relied on breaking boost.atomic?
<jbjnr> ???
<jbjnr> don't remember that one - should I look for it somehow?
<K-ballo> neh, I just wanted to know if it is one of the removed ones
<K-ballo> there was one scheduler that relied on atomics of non trivially copyable types
<K-ballo> so it had to keep using old boost::atomic, which did not diagnose
<K-ballo> and is the only reason we still keep boost.atomic around
<jbjnr> If I just search for boost::atomic in the schedulers ...
<K-ballo> you should find some lockfree deque_node
<K-ballo> but I just tried searching and could not identify anything
<jbjnr> simbergm: crap - I messed up the rebase of your branch - fixing it now
<hkaiser> K-ballo: ABP scheduler
<K-ballo> yes! that's the one
<jbjnr> ABP interesting. I use bits of that
<K-ballo> is it gone now?
<K-ballo> ouch
<hkaiser> K-ballo: there is a PR
<jbjnr> I use the abp_fifo/lifo in my scheduler
<hkaiser> #3186, looks like it does not touch the abp stuff, though :/
<jbjnr> take tasks from hot end, steal from cold end, to improve cache reuse
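A rough, lock-based sketch of the "hot end / cold end" idea jbjnr describes (not HPX's lockfree deque; the class and names below are invented for illustration):

    #include <deque>
    #include <mutex>
    #include <optional>

    // The owning worker pushes and pops at the back ("hot" end, most
    // recently touched, likely still cache-warm), while thieves steal from
    // the front ("cold" end) so they do not compete for cache-warm tasks.
    template <typename Task>
    class stealing_queue
    {
        std::deque<Task> tasks_;
        std::mutex mtx_;            // HPX uses lockfree structures instead

    public:
        void push(Task t)
        {
            std::lock_guard<std::mutex> l(mtx_);
            tasks_.push_back(std::move(t));
        }

        std::optional<Task> pop_hot()       // owner: LIFO from the hot end
        {
            std::lock_guard<std::mutex> l(mtx_);
            if (tasks_.empty()) return std::nullopt;
            Task t = std::move(tasks_.back());
            tasks_.pop_back();
            return t;
        }

        std::optional<Task> steal_cold()    // thief: FIFO from the cold end
        {
            std::lock_guard<std::mutex> l(mtx_);
            if (tasks_.empty()) return std::nullopt;
            Task t = std::move(tasks_.front());
            tasks_.pop_front();
            return t;
        }
    };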
nikunj has joined #ste||ar
<jbjnr> that's the PR I just broke :)
<hkaiser> jbjnr: in my experience the added cost of the abp scheduler cancels out the cache benefits
eschnett has joined #ste||ar
<hkaiser> YMMV
<jbjnr> I have a flag for turning it on and off. timings will be given when I'm done with all this tweaking
<hkaiser> cool
<simbergm> jbjnr: thanks! I didn't realize it had conflicts, I can also do(/continue) the rebase if you want
<jbjnr> just pushed the fixed branch - if it passes pycicle tests, I'll merge it
<simbergm> okay, nice
<simbergm> but wait, are you still running pycicle with the cades config PR? I was running pycicle just for some PRs but your instance still gives bad results
heller_ has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
<github> [hpx] sithhell created readd_abort (+1 new commit): https://git.io/vxUYX
<github> hpx/readd_abort 98653ad Thomas Heller: Readding accidently removed std::abort...
<simbergm> jbjnr: btw, note that all schedulers on master use the deque backend now
heller_ has joined #ste||ar
<jbjnr> simbergm: good to know
<simbergm> the (non-abp) fifo backend could've stayed with a queue if it looks like it makes a difference
<simbergm> or rather could be changed back to use a queue
<simbergm> lifo needs deque though
<jbjnr> Was the only difference between abp and the others a queue vs a deque? I've forgotten now
<simbergm> pretty much
<jbjnr> pycicle seems broken
<simbergm> lifo: stack, fifo: queue, abp lifo and fifo: deque
<simbergm> it does
<jbjnr> but they all use deque now anyway - that's what you said, yes?
<simbergm> exactly
<jbjnr> does that mean the abp deque backend is not inside a big IFDEF ABP_STUFF any more
<jbjnr> (so we should remove the abp scheduler too - since it was always experimental anyway)
<simbergm> it is still inside an ifdef
<simbergm> but I guess it need not be
<jbjnr> if the others use it - why?
<K-ballo> does that mean they all rely on broken code not being diagnosed for its violation? as the ABP was?
<jbjnr> ssshh
<simbergm> uhm, perhaps
<simbergm> I didn't know there was a problem
<jbjnr> nor I
<github> [hpx] sithhell opened pull request #3238: Readding accidently removed std::abort (master...readd_abort) https://git.io/vxUO0
<K-ballo> that would explain why I couldn't pinpoint a single scheduler in a quick find
<K-ballo> the ABP code needs to be fixed sooner or later, anyhow
<K-ballo> as long as it is not being diagnosed, it will "just work" despite being undefined
<simbergm> so what is the problem there? briefly at least
<K-ballo> lockfree deque needs atomic of non-trivially-copyable type
<K-ballo> an atomic of a struct with like 6 atomic pointers inside, or something odd like that
<K-ballo> unfortunately I do not understand the code enough to replace it with a conforming implementation
<simbergm> :/
<simbergm> ok, sounds nasty
<K-ballo> it is, but again as long as it is not being diagnosed it will "just work"
<K-ballo> and we stick with boost.atomic because it does not diagnose (or at least hasn't so far)
<zao> Until it doesn't.
bentham400 has joined #ste||ar
<K-ballo> frankly, it's in a more stable condition broken as it is now than it will be after we attempt to fix it :P
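For reference, a small illustration of the constraint K-ballo describes: std::atomic<T> requires T to be trivially copyable, which a node full of atomic pointers is not; plain_node and tagged_node below are made-up stand-ins, not the actual lockfree deque_node:

    #include <atomic>
    #include <type_traits>

    struct plain_node
    {
        plain_node* left;
        plain_node* right;
    };

    struct tagged_node
    {
        std::atomic<plain_node*> left;   // atomic members delete the copy
        std::atomic<plain_node*> right;  // operations of the enclosing struct
    };

    // a struct of plain pointers is trivially copyable, so std::atomic
    // accepts it (possibly implemented via a lock or a wide CAS)
    static_assert(std::is_trivially_copyable<plain_node>::value, "ok");
    std::atomic<plain_node> fine;

    // a struct containing std::atomic members is not trivially copyable;
    // std::atomic<tagged_node> is ill-formed and gets diagnosed, whereas
    // old boost::atomic accepted it silently
    static_assert(!std::is_trivially_copyable<tagged_node>::value, "not tc");
    // std::atomic<tagged_node> broken;   // does not compile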
bentham400 has quit [Remote host closed the connection]
<heller_> we need the libcds port, clearly ;)
Anushi1998 has quit [Ping timeout: 245 seconds]
<jbjnr> thanks K-ballo - this gives us a very good use case for libcds in GSoC too.
hkaiser has quit [Quit: bye]
aserio has joined #ste||ar
<K-ballo> a quick test with std::atomic compiles core hpx
<K-ballo> let's try a full build
<github> [hpx] K-ballo created std-atomic (+1 new commit): https://git.io/vxUGE
<github> hpx/std-atomic 2e99d24 Agustin K-ballo Berge: Replace the last usages of boost::atomic
diehlpk_work has joined #ste||ar
sharonlam has joined #ste||ar
<sharonlam> diehlpk_work: what is the full name for EMU nodal discretization?
<diehlpk_work> sharonlam, I do not know
<diehlpk_work> EMU stands for a name of a code
EverYoung has joined #ste||ar
Anushi1998 has joined #ste||ar
<sharonlam> I see thx
<diehlpk_work> To all our GSoC students: Have you seen the note about working hours in the GSoC student FAQs?
<diehlpk_work> How much time does GSoC participation take?
<diehlpk_work> You are expected to spend around 30+ hours a week working on your project during the 3 month coding period. If you already have an internship, another summer job, or plan to be gone on vacation for more than a week during that time, GSoC is not the right program for you this year.
hkaiser has joined #ste||ar
<sharonlam> yea I heard that people work full-time
<zao> 75% fulltime, nice.
aserio has quit [Ping timeout: 256 seconds]
aserio has joined #ste||ar
EverYoung has quit [Ping timeout: 256 seconds]
<verganz> from my experience talking to a GSoC'17 participant, he agreed with his mentor that he could spend less time on the project during the exams, but would make up the time after the examinations
<verganz> I am in the same situation, because I'm studying at the same university; we have exams in June
<diehlpk_work> Yes, when you need to work less for one or two weeks, this is ok
<verganz> Ok, that's what I'm talking about
<jbjnr> tests.unit.parallel.executors.executor_parameters always fails on my laptop.
mcopik_ has quit [Read error: Connection reset by peer]
<jbjnr> is that a new thing or should I expect it?
<jbjnr> (It times out)
<sharonlam> diehlpk_work: what is the scope of the material for the peridynamics project? (homogeneous/brittle/density/etc.)
<diehlpk_work> verganz, please read my private message
<diehlpk_work> sharonlam, Just one easy material
<diehlpk_work> The focus is on the parallel computing and load balancing
<jbjnr> verganz: our experience has been that students frequently say they'll work hard (after exams for example), but in reality can be quite relaxed about the definition of 'hard'
<sharonlam> yea it's really easy to lose the focus
<diehlpk_work> sharonlam, Implementing the PMB model is sufficient
<diehlpk_work> Adding new models is not complicated
<sharonlam> diehlpk_work: ahaa, so no need to worry about choosing delta and other parameters
Anushi1998 has quit [Ping timeout: 256 seconds]
<diehlpk_work> No, it is really on parallel computing
<diehlpk_work> You should understand the principle of this model
<diehlpk_work> So you know what you are implementing, but you do not need to think about the mathematical issues
<sharonlam> great to know, I was pretty intimidated by the physics proof at first
<simbergm> jbjnr: I've seen that fail on rostam, but very rarely
<diehlpk_work> No, just understand the basic principle, like neighbor search and exchange of forces
<simbergm> and I'm never sure if it's really that test that's broken or something else in hpx
<zao> I can't test anything, my machines are all offline thanks to ISP troubleshooting \o/
<sharonlam> diehlpk_work: yea you basically summed up the general picture. I'll start by ignoring the tiny functions that enhance the accuracy
<jbjnr> simbergm: ok. it fails every time for me, so it must be something fundamental
<jbjnr> I'll have a look when I get a moment
<jbjnr> just testing now that I've removed wait_or_add_new
<zao> Is that macOS or another OS?
<jbjnr> zao: linux laptop
<sharonlam> does hpx provide mechanisms to do load balancing?
<jbjnr> sharonlam: of meshes/data? not directly. You can tell it to move something, but you have to decide what to move, we don't have a general purpose partitioning algorithm anywhere
<jbjnr> but if you want a summer project .... zoltan reimplemented in hpx - now that would be awesome
<sharonlam> for example if I want to do a 1-D array addition and partition it across different localities, how can I sum the subtotals with hpx?
mcopik has joined #ste||ar
<sharonlam> sorry if it's not clear or too naive, I'm really new to this field
galabc has joined #ste||ar
<galabc> Hi, I have a question, in the article http://stellar.cct.lsu.edu/pubs/khatami_espm2_2017.pdf for_each loops are used
<galabc> like in figure 3
<galabc> is for_each actually an algorithm from HPX? If it is, how is it different from HPX for_loop algorithm?
<zao> Both the SC++L and we have a for_each, IIRC.
<K-ballo> HPX's parallel for_each corresponds to C++17 parallel std::for_each
<K-ballo> for_loop is a proposed extension in a newer parallelism TS, I believe it keeps induction variables around?
<K-ballo> let's just say it is a lower level parallel loop construct
<galabc> ok, smart executors were used as an execution policy for the for_each loop in the article
<K-ballo> if I remember correctly, for_loop comes with its own mini DSEL and all
<galabc> do they also work on HPX for_loops?
<K-ballo> for_each is much much simpler, it just calls some function on each of the elements of a range
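For comparison, a minimal sketch of the two constructs as they appear in HPX around this time (header paths and the execution namespace may differ slightly between versions; compare and the lambdas are placeholders):

    #include <hpx/include/parallel_for_each.hpp>
    #include <hpx/include/parallel_for_loop.hpp>
    #include <cstddef>
    #include <vector>

    void compare(std::vector<double>& v)
    {
        // for_each: C++17-style parallel algorithm, calls one function
        // object per element of the range
        hpx::parallel::for_each(hpx::parallel::execution::par,
            v.begin(), v.end(), [](double& x) { x *= 2.0; });

        // for_loop: Parallelism TS-style loop over an index range, which
        // can additionally carry induction and reduction objects
        hpx::parallel::for_loop(hpx::parallel::execution::par,
            std::size_t(0), v.size(), [&](std::size_t i) { v[i] += 1.0; });
    }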
<galabc> Do the smart executors work on for_loops or only on for_each?
<verganz> diehlpk_work, thanks for comments, I got the idea
<verganz> should it be a more theoretical description with some mathematical modelling, or would it be better to use some code snippets etc. in the proposal?
<jbjnr> sharonlam: are you using the partitioned vector stuff? only hkaiser knows what's going on there. You'll want some kind of reduce algorithm on top of the partitioned vector, but I've never tried using that stuff.
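A hedged sketch of what that could look like for sharonlam's 1-D sum, assuming the segmented overload of reduce is available for partitioned_vector iterators (as in the HPX partitioned_vector examples; exact headers and policy spellings may differ between versions):

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <hpx/include/partitioned_vector.hpp>
    #include <hpx/include/parallel_reduce.hpp>
    #include <vector>

    // register the partitioned_vector<double> component once per program so
    // its partitions can be created on remote localities
    HPX_REGISTER_PARTITIONED_VECTOR(double);

    int main()
    {
        std::vector<hpx::id_type> localities = hpx::find_all_localities();

        // 1000 elements initialized to 1.0, partitions spread over all
        // localities
        hpx::partitioned_vector<double> v(
            1000, 1.0, hpx::container_layout(localities));

        // segmented reduce: each locality sums its local partition, the
        // partial sums are then combined on the calling locality
        double total = hpx::parallel::reduce(
            hpx::parallel::execution::par, v.begin(), v.end(), 0.0);

        (void) total;
        return 0;
    }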
<diehlpk_work> verganz, When you say you would like to have collision or contact provided, you have to explain which algorithm you will use
<diehlpk_work> Same for saying you provide boundary conditions: you have to say force or displacement condition...
<diehlpk_work> How are these implemented?
<diehlpk_work> Everyone knows that one could implement things, but we are interested in how things are done
<sharonlam> not really, I'm going to partition a grid of 2d/3d nodes and calculate their interaction forces. I was asking the question about vector addition just to get an idea of domain partitioning in hpx
<sharonlam> maybe I should think about the data structure of the grid first
eschnett has quit [Quit: eschnett]
<github> [hpx] msimberg pushed 2 new commits to master: https://git.io/vxUaq
<github> hpx/master 8aee503 nikunj: Fix #3124: Build hello world client through make tests.unit.build
<github> hpx/master a281166 Mikael Simberg: Merge pull request #3178 from NK-Nikunj/fix-#3124...
mbremer has joined #ste||ar
<mbremer> @hkaiser: yt?
EverYoung has joined #ste||ar
<hkaiser> mbremer: ready whenever you are
galabc has quit [Quit: Leaving]
aserio has quit [Quit: aserio]
mbremer has quit [Client Quit]
aserio has joined #ste||ar
sharonlam has left #ste||ar [#ste||ar]
K-ballo has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
jaafar has joined #ste||ar
EverYoung has joined #ste||ar
K-ballo has joined #ste||ar
<simbergm> zao: do you remember what compiler/version/boost etc you used when testing the migrate component test?
<zao> Clang 3.8-ish, Boost 1.65-something, on a container I think ran debian.
<zao> I don't have access to my logs nor my compile machine, thanks to my ISP :)
<zao> I can check in an hour or two.
<simbergm> okay, no worries, I'm just trying to figure out if I'm seeing patterns where there are none for that test (i.e. if it fails with some specific configuration)
<simbergm> I think I'm just seeing things
<simbergm> or imagining things
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<zao> IIRC, the image also has a GCC of some sort to test with.
EverYoun_ has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]
EverYoung has quit [Ping timeout: 276 seconds]
EverYoung has joined #ste||ar
<zao> Clang 3.8.1, Boost 1.65.1, -DHPX_WITH_CXX14=ON
<zao> hwloc and tcmalloc from system.
<simbergm> zao: thanks!
eschnett has joined #ste||ar
david_pfander has quit [Ping timeout: 240 seconds]
Smasher has joined #ste||ar
<github> [hpx] Anushi1998 opened pull request #3239: Changing std::rand() to a better inbuilt PRNG generator. (master...master) https://git.io/vxUQt
Anushi1998 has joined #ste||ar
<zao> simbergm: Any build variations you want me to look into?
Viraj has joined #ste||ar
Viraj has quit [Ping timeout: 260 seconds]
aserio has quit [Ping timeout: 240 seconds]
K-ballo has quit [Ping timeout: 268 seconds]
K-ballo has joined #ste||ar
mcopik_ has joined #ste||ar
mcopik has quit [Ping timeout: 240 seconds]
EverYoun_ has joined #ste||ar
mcopik_ has quit [Ping timeout: 246 seconds]
EverYoung has quit [Ping timeout: 256 seconds]
aserio has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
eschnett has quit [Quit: eschnett]
anushi_ has joined #ste||ar
Anushi1998 has quit [Ping timeout: 252 seconds]
aserio has quit [Ping timeout: 264 seconds]
aserio has joined #ste||ar
aserio has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
eschnett has joined #ste||ar
eschnett has quit [Ping timeout: 256 seconds]
eschnett has joined #ste||ar
hkaiser has quit [Quit: bye]
aserio has quit [Ping timeout: 240 seconds]
<K-ballo> curiously enough the build breaks but not for the old reasons: https://circleci.com/gh/STEllAR-GROUP/hpx/9768
<K-ballo> and in other news, it seems there's something wrong with our -latomic detection cmake code
aserio has joined #ste||ar
L8QBO8andreagus has joined #ste||ar
L8QBO8andreagus has quit [Remote host closed the connection]
anushi_ has quit [Quit: Leaving]
anushi_ has joined #ste||ar
anushi_ is now known as Anushi1998
zao_ has joined #ste||ar
jakub_golinowski has joined #ste||ar
mcopik has joined #ste||ar
nikunj has quit [Ping timeout: 260 seconds]
Anushi1998 has quit [Ping timeout: 260 seconds]
<heller_> aserio: 19-20, right?
<aserio> heller_: the meeting is April 19th and 20th
<aserio> I gave you some wiggle room so that you could come in earlier if you wanted
<heller_> aserio: great
<heller_> aserio: my goal was to fly in on Sunday and leave on Saturday?
Anushi1998 has joined #ste||ar
<aserio> heller_: sounds good to me
<heller_> aserio: sounds good?
<heller_> great
hkaiser has joined #ste||ar
parsa has joined #ste||ar
Anushi1998 has quit [Remote host closed the connection]
eschnett has quit [Quit: eschnett]
Smasher has quit [Ping timeout: 260 seconds]
Smasher has joined #ste||ar
Smasher has quit [Ping timeout: 240 seconds]
Smasher has joined #ste||ar
nikunj has joined #ste||ar
aserio has quit [Quit: aserio]
<diehlpk_work> heller_, hkaiser You could look at the article's outline. I have a draft in the google doc
parsa has quit [Quit: Zzzzzzzzzzzz]
jakub_golinowski has quit [Quit: Ex-Chat]
parsa has joined #ste||ar
jakub_golinowski has joined #ste||ar
parsa has quit [Client Quit]
<jbjnr> the distributed.tcp.partitioned_vector_*** tests all fail for me - is that a surprise or do they fail for everyone?
<zao> Haven't tried in a good while.
simbergm has quit [Ping timeout: 252 seconds]
nikunj has quit [Quit: Page closed]
<jbjnr> I suspect that all distributed tests will fail when run on the same node. the thread binding must be screwed
<jbjnr> but if they failed, we'd see it on the dashboard...
Smasher has quit [Remote host closed the connection]
<zao> Mem[||||||||||||||||||||||||||||||||||||||||||||15.3G/15.7G]
<zao> Swp[|||||||||||||||| 9.25G/31.9G]
<zao> Someone (me) seems to have removed the -j for building my tests.
<jakub_golinowski> ?
<zao> First run seems good, at least.
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 252 seconds]
EverYoung has quit [Ping timeout: 240 seconds]
<hkaiser> jaafar: shouldn't fail
<hkaiser> jbjnr: ^^
EverYoung has joined #ste||ar
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar