aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
eschnett has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 252 seconds]
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 240 seconds]
StefanLSU has joined #ste||ar
mbremer_ has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
StefanLSU has quit [Quit: StefanLSU]
K-ballo has quit [Quit: K-ballo]
mbremer_ has quit [Quit: Page closed]
diehlpk has quit [Ping timeout: 260 seconds]
hkaiser has quit [Quit: bye]
StefanLSU has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
rod_t has joined #ste||ar
StefanLSU has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
StefanLSU has joined #ste||ar
EverYoung has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
StefanLSU has joined #ste||ar
EverYoung has quit [Ping timeout: 255 seconds]
pree has joined #ste||ar
pree has quit [Ping timeout: 260 seconds]
StefanLSU has quit [Quit: StefanLSU]
StefanLSU has joined #ste||ar
pree has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
pree has joined #ste||ar
pree has quit [Ping timeout: 240 seconds]
pree has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
pree has quit [Ping timeout: 240 seconds]
rod_t has quit [Ping timeout: 264 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
jbjnr has joined #ste||ar
Matombo has joined #ste||ar
EverYoung has joined #ste||ar
bikineev has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
Matombo has quit [Remote host closed the connection]
kxkamil has joined #ste||ar
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/v5yl8
<github> hpx/gh-pages 5e60138 StellarBot: Updating docs
EverYoung has joined #ste||ar
<jbjnr> disturbing new trend in hpx applications: Ctrl-C frequently doesn't seem to kill them off properly. They just hang there forever until you kill them from another terminal
EverYoung has quit [Ping timeout: 255 seconds]
<zao> Lovely.
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
mcopik has joined #ste||ar
bikineev has quit [Remote host closed the connection]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
<heller> hkaiser: a leg to BR doesn't work out, sorry :/
<hkaiser> heller: sure, np
<hkaiser> heller: I suspected as much
<heller> it's already a stretch to fly over just for the award
<hkaiser> nod
<hkaiser> heller: but you better show up if they want to hand over the Nobel prize
<jbjnr> the life of a rockstar - such hardship!
<jbjnr> hkaiser: what happened to the par_task execution policy? I can't get parallel_task_policy to work. What's the right name these days?
<heller> how about an extended weekend for you? with an extension to SF before flying to seattle?
<hkaiser> par(task)
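(A minimal sketch of that spelling, assuming the current hpx::parallel::execution names; the container and the lambda are purely illustrative:)

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>
    #include <vector>

    int main()
    {
        std::vector<int> v(1000, 1);

        namespace execution = hpx::parallel::execution;

        // par(task) yields the asynchronous variant of the parallel policy
        // (parallel_task_policy): the algorithm returns a future instead of
        // blocking until completion.
        auto f = hpx::parallel::for_each(
            execution::par(execution::task),
            v.begin(), v.end(), [](int& i) { i *= 2; });

        f.get();    // wait for the parallel loop to finish
        return 0;
    }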
<heller> hkaiser: hey, I am showing up to the award!
<jbjnr> doh!
<hkaiser> hah!
<heller> just can't visit you guys at the other end of the continent :/
<hkaiser> sure, understood
<hkaiser> I'll talk to my better half if she's up to a weekend in SF
<heller> if I ever win a Nobel prize, I'll fly you to Oslo, no problem
<heller> he ;)
<heller> I'll be flying back friday at 13:42 from SFO
<hkaiser> ahh, so no weekend after all, ok
<heller> sorry
<heller> tight schedule, life of a rockstar and so on
<jbjnr> lol
<hkaiser> I can see that, clearly
<heller> looking forward to two days in berkeley though!
<jbjnr> PS. just because you're rich and famous now - you can't skip the crappy hpx tutorials in Lugano btw.
<heller> i won't
<heller> also, I miss the rich part
<heller> where do I sign up for that?
pree has joined #ste||ar
<jbjnr> that happens later ...
<jbjnr> (maybe)
<zao> just get() the future to collect
pree has quit [Read error: Connection reset by peer]
<jbjnr> super!
<zao> or do you use channels for revenue streams?
<jbjnr> auto wealth = channel(richness).get()
<hkaiser> lol
pree has joined #ste||ar
bikineev has joined #ste||ar
bikineev has quit [Ping timeout: 260 seconds]
pree has quit [Read error: Connection reset by peer]
pree has joined #ste||ar
pree has quit [Ping timeout: 240 seconds]
pree has joined #ste||ar
<github> [hpx] hkaiser pushed 1 new commit to master: https://git.io/v5y1R
<github> hpx/master 04fd3c0 Hartmut Kaiser: Merge pull request #2897 from STEllAR-GROUP/fixing_2896...
<Guest81687> [hpx] hkaiser closed pull request #2894: Fix incorrect handling of compile definition with value 0 (master...cmake_fix) https://git.io/v5XfZ
diehlpk_work has joined #ste||ar
bikineev has joined #ste||ar
<jbjnr> zao: on reflection, I think it should be auto wealth = channel(fame).get()
<jbjnr> seems more type correct
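(For the record, the API being riffed on is real; a rough sketch, assuming hpx::lcos::local::channel and borrowing the joke's variable names:)

    #include <hpx/hpx_main.hpp>
    #include <hpx/lcos/local/channel.hpp>

    int main()
    {
        hpx::lcos::local::channel<double> fame;

        fame.set(42.0);                      // a producer pushes a value
        double wealth = fame.get().get();    // get() returns a future<double>

        (void) wealth;
        return 0;
    }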
<pree> lol
denis_blank has joined #ste||ar
denis_blank has quit [Client Quit]
thundergroudon[m has quit [Ping timeout: 240 seconds]
mcopik has quit [Ping timeout: 240 seconds]
taeguk[m] has quit [Ping timeout: 246 seconds]
bikineev has quit [Ping timeout: 264 seconds]
aserio has joined #ste||ar
pree has quit [Ping timeout: 240 seconds]
mcopik has joined #ste||ar
hkaiser has quit [Quit: bye]
bikineev has joined #ste||ar
thundergroudon[m has joined #ste||ar
taeguk[m] has joined #ste||ar
mcopik has quit [Ping timeout: 246 seconds]
<github> [hpx] aserio pushed 5 new commits to new_people: https://git.io/v5y5U
<github> hpx/new_people 202f485 mcopik: Fix incorrect handling of compile definition with value 0
<github> hpx/new_people d77f8d1 Thomas Heller: Fixing SLURM environment parsing...
<github> hpx/new_people 5c46da1 Hartmut Kaiser: Removing dependency on Boost.ICL
<heller> aserio: what are you doing?
<aserio> trying to update the branch I am working on
<aserio> heller: did I not do that right?
<heller> Ah, fast forward merge, forgot that those commits were on master already
EverYoung has joined #ste||ar
akheir has joined #ste||ar
hkaiser has joined #ste||ar
<github> [hpx] hkaiser closed pull request #2903: Documentation Updates-- Adding New People (master...new_people) https://git.io/v5D1u
<github> [hpx] hkaiser deleted new_people at 61f443f: https://git.io/v5yFi
EverYoung has quit [Ping timeout: 255 seconds]
<wash[m]> aserio: I'll be able to call in for the ste||ar call today
<K-ballo> "hkaiser deleted new_people"
<hkaiser> K-ballo: lol
<aserio> newbies watch out
<zao> ruh-roh
hkaiser has quit [Read error: Connection reset by peer]
<aserio> wash[m]: see you in a bit
bikineev has quit [Read error: No route to host]
bikineev has joined #ste||ar
aserio has quit [Ping timeout: 246 seconds]
khuck has joined #ste||ar
aserio has joined #ste||ar
<khuck> is there any documentation on how to build HPX with Intel 17 compilers? I am getting lots of errors/failures.
pree has joined #ste||ar
pree has quit [Client Quit]
<K-ballo> the build for intel 17 is broken on buildbot
<khuck> K-ballo: thanks. I guess I won't bother with it for now
<khuck> yeah, those are the same errors I see.
<khuck> has an issue/bug been submitted, or should I submit one?
<K-ballo> from a cursory look, it seems to be related to the unwrap/ped work
<K-ballo> I think this is the same error I ran into when.. trying to do... something?
<khuck> that code has been problematic
<K-ballo> poisonous substitutions during overload resolution
<K-ballo> some bad interaction with the remote dataflow
<K-ballo> ...what was I doing when I ran into it??
<zao> src/components/performance_counters/memory/CMakeFiles/memory_component.dir/memory.cpp.o:memory.cpp:function hpx::performance_counters::memory::register_counter_types(): error: undefined reference to 'hpx::performance_counters::memory::read_psm_virtual(bool)'
<K-ballo> ah yes, the shortcircuiting all_of/any_of traits
<zao> src/components/performance_counters/memory/CMakeFiles/memory_component.dir/memory.cpp.o:memory.cpp:function hpx::performance_counters::memory::register_counter_types(): error: undefined reference to 'hpx::performance_counters::memory::read_psm_resident(bool)'
<zao> windows, macosx, linux.
<zao> Feels like some platforms are missing there :D
hkaiser has joined #ste||ar
<khuck> OK, I submitted an issue - https://github.com/STEllAR-GROUP/hpx/issues/2904
<hkaiser> \o/
<hkaiser> khuck: we might not care sufficiently to fix things for an older compiler, though
<zao> Are these hpx::performance_counters::memory counters supposed to be mandatory?
<zao> Guessing that it'll break on FreeBSD if I get around building as well.
<khuck> hkaiser: it doesn't work for 18, either.
<hkaiser> zao: no, you don't need them
pree has joined #ste||ar
<hkaiser> khuck: braindead compilers - *sigh*
<hkaiser> zao: those didn't change in a long time
<khuck> hkaiser: if we don't care about Intel compilers w/r/t Phylanx, that's fine with me
<hkaiser> I personally definitely don't care - others might
<zao> Then I've missed some bloody platform define macro somewhere.
<K-ballo> eh, I suspect this is our issue
<K-ballo> assuming is the same issue I ran into on msvc/clang
<khuck> speaking of other brain-dead compilers... should I expect xlC on Power8 to work?
<K-ballo> bad overload sets
<hkaiser> K-ballo: expect it not to work, use clang there
<hkaiser> K-ballo: fixable?
<hkaiser> khuck: expect it not to work, use clang there
<hkaiser> sorry K-ballo, bad highlighting again :/
<khuck> hkaiser: (thumbs up)
<hkaiser> K-ballo: do you think this is fixable?
<K-ballo> if it is what I suspect it is, it's definitely fixable.. I was having trouble deciphering the lookup interactions between the local and the remote dataflow
parsa has joined #ste||ar
<khuck> hkaiser: I have clang 3.4.2 - is that sufficient?
<hkaiser> khuck: should be
<zao> I don't see how this has ever worked on FreeBSD, nor how it'd ever have been excluded.
<zao> There's nothing conditional in the build as far as I can see.
<zao> Is it possible that these have never been built if you don't `install` or something?
<khuck> ?
<hkaiser> zao: no idea
<K-ballo> oh no, the emojis finally reached IRC
kxkamil has quit [Quit: Leaving]
Matombo has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
bikineev has joined #ste||ar
zbyerly_ has joined #ste||ar
<khuck> K-ballo: ?
<K-ballo> (ノ ゜Д゜)ノ ︵ ┻━┻
<khuck> heh
<zao> [3/4] Extracting boost-docs-1.64.0: 59%
<zao> Fast, this machine isn't.
<khuck> hkaiser: my clang is broken, trying with xlc 14. I expect pain.
<hkaiser> khuck: lots of it ;)
<khuck> not looking good...
<khuck> -- Performing Test HPX_WITH_CXX17
<khuck> -- Performing Test HPX_WITH_CXX17 - Failed
<khuck> -- Performing Test HPX_WITH_CXX1Z
<khuck> -- Performing Test HPX_WITH_CXX1Z - Failed
<khuck> -- Performing Test HPX_WITH_CXX14
<khuck> -- Performing Test HPX_WITH_CXX14 - Failed
<khuck> -- Performing Test HPX_WITH_CXX1Y
<khuck> -- Performing Test HPX_WITH_CXX1Y - Success
<khuck> -- C++ mode used: C++1y
<hkaiser> xlc is the incarnation of a braindead C++ compiler
<zao> You're not wrong about that :)
<zao> I gave up making Boost care about it long ago.
<zao> Around the time of retiring our last AIX box, probably.
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
<diehlpk_work> heller, when would you like to skype?
parsa has joined #ste||ar
<heller> diehlpk_work: give me a few minutes
<heller> diehlpk_work: I'll be available in hopefully an hour
<diehlpk_work> Sure, I am available the full day
aserio has quit [Ping timeout: 246 seconds]
eschnett has quit [Quit: eschnett]
hkaiser has joined #ste||ar
aserio has joined #ste||ar
mcopik has joined #ste||ar
bikineev has quit [Ping timeout: 248 seconds]
EverYoung has quit [Ping timeout: 246 seconds]
<zbyerly_> diehlpk_work, heller what's up?
EverYoung has joined #ste||ar
StefanLSU has joined #ste||ar
<diehlpk_work> zbyerly_, We planned a skype meeting today
<heller> yes
<zbyerly_> If you're talking about the paper, I am available
<heller> wash[m]: yt?
<heller> give me another 15 minutes please
<aserio> zbyerly_: how is the STORM report coming?
<zbyerly_> aserio, I am in writing mode so I will do it today
<aserio> Wonderful, I was hoping to send out what you did to Carola and Robert for their input
<zbyerly_> aserio, you mean like stuff related to real-time forecasting?
<zbyerly_> aserio, i wasn't planning to write anything about that
<aserio> zbyerly_: I want them to add what they accomplished this year
parsa has quit [Quit: Zzzzzzzzzzzz]
<heller> diehlpk_work: Kids won't sleep..
StefanLSU has quit [Quit: StefanLSU]
aserio has quit [Ping timeout: 246 seconds]
rod_t has joined #ste||ar
StefanLSU has joined #ste||ar
StefanLSU has quit [Quit: StefanLSU]
<zbyerly_> diehlpk_work, heller i'd like to be included if you guys are having a skype meeting
parsa has joined #ste||ar
zbyerly_4 has joined #ste||ar
<diehlpk_work> zbyerly, zbyerly_ Yes, I will let you know
<diehlpk_work> heller, I will be available from now on
zbyerly_4 has quit [Remote host closed the connection]
mcopik_ has joined #ste||ar
mcopik has quit [Read error: Connection reset by peer]
aserio has joined #ste||ar
<zbyerly_> aserio, sent you a document, let me know if it's in a format you can use
<github> [hpx] K-ballo force-pushed pack-short-circuit from dfc2e88 to 6a48e50: https://git.io/v762b
<github> hpx/pack-short-circuit 6a48e50 Agustin K-ballo Berge: Short-circuit all_of/any_of/none_of instantiations
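(The gist of the short-circuiting, as far as I understand the branch; a sketch with illustrative names, not HPX's actual implementation:)

    #include <type_traits>

    // all_of<Ts...>: like std::conjunction, it stops instantiating further
    // predicates as soon as one of them is false, instead of eagerly
    // instantiating the whole pack.
    template <typename... Ts>
    struct all_of : std::true_type {};    // empty pack is vacuously true

    template <typename T, typename... Ts>
    struct all_of<T, Ts...>
      : std::conditional<T::value, all_of<Ts...>, std::false_type>::type
    {
        // std::conditional only requires the *selected* branch to be a
        // complete type, so all_of<Ts...> is never instantiated once
        // T::value is false: that's the short circuit.
    };

    static_assert(
        all_of<std::is_integral<int>, std::is_integral<long>>::value, "");
    static_assert(
        !all_of<std::is_integral<float>, std::is_integral<int>>::value, "");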
<aserio> zbyerly_: I will let you know when I get it
<zao> lib/hpx/libhpx_memory.so.1.1.0 indeed fails to build on FreeBSD too.
<zao> Seems like that's not part of a 'tests' build.
<zao> Could've sworn I built an 'all' in the past.
<zao> But that may not be actually everything.
<zao> Yay for CMake.
<zao> Upside of things, I didn't screw up the port from FreeBSD.
<mbremer> Hi, I would like to run an HPX application with the SNC-4 configuration on the KNL nodes. Is HPX able to pin threads in a NUMA-aware way (presumably hwloc does some of the heavy lifting here), and are there any settings I need to modify to do this?
<zbyerly_> mbremer, greetings. thank you for your interest in HPX. HPX does support NUMA-awareness. You can use command line flags to do this
<zao> I wonder how long a command line can be in a modern OS... IRIX limited it to 4096 :)
<zbyerly_> mbremer, --hpx:numa-sensitive ?
<hkaiser> mbremer: not right now, there is some work being done currently on a branch, though
<zbyerly_> you can also use a locality for each numa domain
<hkaiser> mbremer: you'd have to do some juggling with --hpx:bind, look at the docs for more details
<zbyerly_> hkaiser, what does --hpx:numa-sensitive do?
eschnett has joined #ste||ar
<heller> well, in theory, SNC4 is supported out of the box regarding thread binding
<heller> however, thread pinning is only half of the game
<zao> I wonder if my KNLs work yet..
<zao> I kind of like them to come up when rebooted and not suddenly keel over.
<zao> But I may be spoiled.
<mbremer> @heller: what would be the other half?
<heller> as soon as #2891 is merged, I'll port the stream benchmark to use one pool per NUMA domain; that will give a nice boost there
<heller> mbremer: the other half is for executors and allocators to take care of NUMA placement
<zao> hkaiser: If I'm fixing a bunch of dumb FreeBSD stuff and also am porting to a FreeBSD derivative, would you like that in one or separate PRs?
<heller> mbremer: that is, making sure that tasks only work on data placed in the same domain (to avoid latencies) and properly distributing the workload over the various NUMA domains
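(Roughly what that looks like with the hpx::compute::host API the stream benchmark uses; a sketch, and exact headers and constructors may differ between HPX versions:)

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/compute.hpp>
    #include <hpx/include/parallel_for_each.hpp>

    int main()
    {
        // one target per NUMA domain, as detected via hwloc
        auto targets = hpx::compute::host::numa_domains();

        // the allocator first-touches the data distributed across the
        // domains; the executor runs tasks on the domain owning the data
        typedef hpx::compute::host::block_allocator<double> allocator_type;
        allocator_type alloc(targets);
        hpx::compute::host::block_executor<> exec(targets);

        hpx::compute::vector<double, allocator_type> v(1024 * 1024, alloc);

        hpx::parallel::for_each(hpx::parallel::execution::par.on(exec),
            v.begin(), v.end(), [](double& x) { x += 1.0; });

        return 0;
    }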
<hkaiser> zao: what's easier for you?
<zao> Gah, Makefile test assumes GNU make.
<zao> hkaiser: Either would work, probably fixing FreeBSD proper first.
<zao> This OS is great, make(1) doesn't handle the constructs used in /home/zao/stellar/hpx/tests/unit/build/src/Makefile
<zao> Can I skip that test completely based on OS in CMake?
<mbremer> @heller: Interesting. Are the executors then application specific, or is that something that can stay at the hpx level? Also a simple question: are the sub-NUMA domains then exposed as localities themselves?
<zao> tests/unit/build/CMakeFiles/pkgconfig_build_dir_test.make_compile, I guess.
<zbyerly_> <zbyerly_> hkaiser, what does --hpx:numa-sensitive do?
<heller> mbremer: 1) application specific 2) No
<mbremer> Also will the memkind library play nice with jemalloc (or whatever allocator I decide to use)?
<heller> zbyerly: makes the scheduler steal differently (not across NUMA domains if it equals 2, only on NUMA boundaries if it equals 1)
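(So a typical invocation combining the binding and stealing knobs might look like the line below; my_hpx_app is a placeholder and the right values depend on the machine:)

    ./my_hpx_app --hpx:threads=64 --hpx:bind=balanced --hpx:numa-sensitive=1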
<heller> mbremer: memkind is built on top of jemalloc. we had problems with that in the past
<heller> mbremer: with that being said, I wasn't able to get satisfying performance with SNC4. best results were in quad mode
<mcopik_> has anyone seen an HPX application lock up when termination_handler is called?
<heller> what's your motivation to go for SNC4?
<mbremer> That's what @hkaiser was saying. What kind of application was that? I'm running an unstructured grid at the moment, so I believe that I'm either latency or memory bound.
<heller> what makes you believe that?
<mbremer> Just wanted to see if it would make a difference / if SNC4 treats the scheduler more nicely.
<heller> most unlikely
<mbremer> The lack of speed-up I was seeing with Vc.
<mbremer> I wasn't getting the full register width's worth of speed-up, e.g. 4x with AVX2
<heller> if you are bandwidth bound, the best check is to boot up in quad-flat mode and use numactl to bind the memory to HBM
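(In quad-flat mode the MCDRAM usually appears as a separate memory-only NUMA node, typically node 1, so the binding heller suggests would look something like this; my_hpx_app is a placeholder:)

    numactl --membind=1 ./my_hpx_app --hpx:threads=64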
<mbremer> Maybe I'll try that. I was also reading that flat mode might help with latency as well, since you might avoid misses in the HBM
<heller> well, that doesn't mean you are memory bound. you might suffer from a bad instruction mix (that is, non-vector instructions getting in the way)
<heller> possibly, yes
<heller> you only know after profiling
<heller> vtune should give you nice hints on what's wrong
<zbyerly_> mbremer, doesn't vtune give you a summary?
<mbremer> Sure, but I think you might need to collect a different analysis than just hotspots?
<mbremer> I'll look into it. It'll be worthwhile to do, before chasing down the whole flat(/SNC4) business.
<heller> mbremer: a good indicator of being memory bound is also to do a scaling run across different numbers of threads. If you see the scaling flatten out, you could either have too little parallelism (HPX's idle-rate counter will tell you that) or be bandwidth limited. Those are at least the two usual causes for such an outcome
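(The idle-rate counter can be printed at shutdown via the performance counter flags, assuming HPX was built with HPX_WITH_THREAD_IDLE_RATES=On; my_hpx_app is a placeholder:)

    ./my_hpx_app --hpx:threads=32 --hpx:print-counter=/threads{locality#0/total}/idle-rate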
<mbremer> I'll try that. I did one earlier, but in hindsight I think the problem size wasn't large enough.
<mbremer> Thanks @heller, @hkaiser, zbyerly_
<diehlpk_work> heller, zbyerly_ Should we try to skype now?
<zbyerly_> diehlpk_work, works for me
aserio has quit [Ping timeout: 246 seconds]
<heller> diehlpk_work: zbyerly: yes
<heller> will you initiate the call?
<diehlpk_work> Yes, I will call you in 5 minutes. Have to walk to the meeting room
<heller> zbyerly: ping
<zbyerly_> heller, i'm here
<heller> zbyerly: what's your skype ID?
<zbyerly_> zbyerly
<heller> k
diehlpk has joined #ste||ar
aserio has joined #ste||ar
<heller> zbyerly: get on the skype!
<diehlpk> zbyerly, We are calling you
pree has quit [Quit: AaBbCc]
<zbyerly_> heller, diehlpk i need to restart my browser
<diehlpk> Ok
<diehlpk> zbyerly_, Linux or Windows?
<diehlpk> Have you shut down skype?
zbyerly_ has quit [Remote host closed the connection]
hkaiser has quit [Quit: bye]
diehlpk has quit [Ping timeout: 255 seconds]
aserio has quit [Read error: Connection reset by peer]
<heller> zbyerly: btw, were the Irma or Harvey models run with HPX?
<heller> as in, is HPX being used in production for CERA?
aserio has joined #ste||ar
mcopik_ has quit [Ping timeout: 260 seconds]
eschnett has quit [Quit: eschnett]
hkaiser has joined #ste||ar
<hkaiser> rod_t: yt?
mcopik_ has joined #ste||ar
jaafar has joined #ste||ar
jaafar has quit [Remote host closed the connection]
akheir has quit [Remote host closed the connection]
<rod_t> hkaiser: yes, now!
<hkaiser> hey
<hkaiser> rod_t: do you know why the tests fail?
<hkaiser> do they create any output?
<hkaiser> rod_t: do the tests run at all? or do they just fail to load?
<rod_t> let me check. I'll go ahead and create a gist of the circleci outputs; all 7 tests pass on rostam.
<hkaiser> rod_t: well, I saw that
<hkaiser> rod_t: but why did the tests fail? what do they print?
aserio has quit [Quit: aserio]
<rod_t> I haven't been able to figure this out yet; that's why I asked you and was hoping you could help.
<hkaiser> rod_t: if you log into the docker image and run them by hand?
<rod_t> on my local machine or on circleci? if on circleci, could you please remind me how I can do it?
<hkaiser> I'd try on the local machine first
<rod_t> sure, it'll take me a few minutes though.
<hkaiser> rod_t: no worries
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
bikineev has joined #ste||ar
<rod_t> hkaiser: they all fail with segmentation faults! http://bit.ly/2h3pRpr
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
<hkaiser> rod_t: not good
<hkaiser> rod_t: do you do release builds there?
<hkaiser> or debug builds?
<hkaiser> rod_t: does it happen if you add -t1 on the command line as well?
<rod_t> I did not set the build type, so whichever is the default.
<rod_t> like this?
<rod_t> root@9dd57b9bf818:~/phylanx/build# ./bin/add_operation_test -t1
<rod_t> {stack-trace}: Segmentation fault
<hkaiser> rod_t: you might need to specify CMAKE_BUILD_TYPE=Debug for the phylanx build; iirc, the hpx docker image has an HPX debug version
diehlpk_work has quit [Quit: Leaving]
<rod_t> I just built Phylanx in debug mode (on the same docker image): no seg faults, but tests still fail. I'll create a gist in a sec.
<hkaiser> thanks
<K-ballo> could it be a recursive instantiation? given that the error is sensitive to instantiation order
<hkaiser> K-ballo: you know better - I have not looked at this at all
<K-ballo> I've seen similar issues due to recursive instantiations on variant... the diagnostic depends on which node starts the recursion
<hkaiser> nasty
<K-ballo> either way, this has the smell of an "unspecified amounts of instantiations during overload resolution" issue
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
Matombo has joined #ste||ar
<hkaiser> rod_t: that is a different problem, isn't it?
<hkaiser> rod_t: here: /usr/bin/python3.5: can't open file '/root/phylanx/build/bin/hpxrun.py': [Errno 2] No such file or directory
bikineev has quit [Remote host closed the connection]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<rod_t> hkaiser: is this from the most recent build?
<hkaiser> that's from the gist you posted above
<hkaiser> rod_t: [17:40] rod_t: http://bit.ly/2x2mQPk
EverYoung has quit [Ping timeout: 255 seconds]
<rod_t> hkaiser: just one sec, I'm looking into it. I think I know where I made a mistake.
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<rod_t> hkaiser: my bad!!! I had forgotten to switch to the circle-ci branch on my local image (sorry!). all tests pass for the debug build but not the release!
<hkaiser> rod_t: that's expected
<rod_t> Thank you, finally got the green on circle-ci!
<hkaiser> you can't run a release build on top of a hpx debug build
<hkaiser> just need to add that to the phylanx cmake script, ensuring that the build types are the same
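(Something along these lines in the Phylanx CMake would catch the mismatch early; HPX_BUILD_TYPE is assumed here to be exported by HPX's CMake package config:)

    if(NOT "${CMAKE_BUILD_TYPE}" STREQUAL "${HPX_BUILD_TYPE}")
      message(FATAL_ERROR
        "CMAKE_BUILD_TYPE (${CMAKE_BUILD_TYPE}) has to match the build type "
        "HPX was built with (${HPX_BUILD_TYPE})")
    endif()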
<hkaiser> but that's for later
<hkaiser> rod_t: great - so we can merge this now and get full testing of all branches - excellent
<rod_t> Glad it's finally working. sorry about the delay.
rod_t has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<K-ballo> aaarg, lcos::local dataflow has a "forwarding" overload in namespace lcos
<K-ballo> er, in namespace hpx
<github> [hpx] brycelelbach pushed 1 new commit to master: https://git.io/v59fr
<github> hpx/master 3289bd3 Bryce Adelstein-Lelbach aka wash: Update my email and affiliation.
<washcuda> hkaiser: can one of you guys regenerate the email image? I don't know how you do that
<washcuda> I made the other changes
<hkaiser> washcuda: sure, we'll do that
rod_t has joined #ste||ar
diehlpk has joined #ste||ar