aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
wash[m] has quit [Ping timeout: 258 seconds]
wash[m] has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
<K-ballo> hkaiser: mpark/variant fixed for msvc19.12
<K-ballo> also, there's a single header release.. but not up to date
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
kazbek has joined #ste||ar
<hkaiser> K-ballo: thanks!
<hkaiser> that's good - so it will work for non-c++17 as well
<K-ballo> we are working on yet another implementation, strictly for compatibility purposes
EverYoung has quit [Ping timeout: 246 seconds]
kazbek has quit [Ping timeout: 260 seconds]
mcopik has quit [Ping timeout: 248 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
eschnett has joined #ste||ar
phaustin has joined #ste||ar
phaustin has quit [Client Quit]
<github> [hpx] hkaiser created find_symbols (+1 new commit): https://git.io/vbueD
<github> hpx/find_symbols 0ea2ea3 Hartmut Kaiser: Change existing symbol_namespace::iterate to return all data instead of invoking a callback
<github> [hpx] hkaiser opened pull request #3072: Change existing symbol_namespace::iterate to return all data instead of invoking a callback (master...find_symbols) https://git.io/vbueF
K-ballo has quit [Quit: K-ballo]
eschnett has quit [Quit: eschnett]
parsa has quit [Quit: Zzzzzzzzzzzz]
hkaiser has quit [Quit: bye]
gedaj has quit [Ping timeout: 240 seconds]
gedaj has joined #ste||ar
gedaj has quit [Ping timeout: 248 seconds]
gedaj has joined #ste||ar
jaafar has quit [Ping timeout: 248 seconds]
<github> [hpx] sithhell closed pull request #3070: Fix dynamic_counters_loaded_1508 test by adding dependency to memory_… (master...fix_dynamic_counters_loaded_1508) https://git.io/vbEfE
<simbergm> heller: good morning, I see you just merged 3069, afaik it will not fix timeouts, but I will investigate if merging the suspension stuff is making some of the tests run longer
<jbjnr> aha. I see pycicle triggering a load of new builds....
<heller> simbergm: yeah, we'll see
<zao> I wonder how many of my test failures are genuine. There's a lot of them.
<heller> the one you posted last night?
<zao> Aye.
<heller> I just see failed, and no reason :/
<simbergm> morning jbjnr, do you know if 2987 has improved after the segfault branch was merged?
<zao> I've got logs too, those were just greps to get summaries.
<jbjnr> simbergm: 2987 - no idea. Can't really tell until I start writing buggy code and get segfaults with the wrong error
<simbergm> ok, fair enough
<simbergm> and zao, what about 2935? are you still seeing failures with those tests? tests.unit.lcos.distributed.tcp.async_continue_cb_colocated and tests.unit.lcos.distributed.tcp.async_cb_remote_client
<heller> simbergm: yes, this hasn't been fixed.
<heller> simbergm: I think I put my analysis in the comments, do they make sense to you?
<jbjnr> My main worry right now is the scan failures that are randomly (?) occurring
<heller> yeah
<heller> they seem to be caused by some hang
<heller> not sure what's happening there
<heller> I can't reproduce those on my machines
<simbergm> heller: yeah, makes sense, was just asking since a lot has changed since then, but then I take it you know that part hasn't been fixed :)
<zao> Reproduces reasonably well on my box, it seems.
<heller> simbergm: it's a problem with the test
<heller> simbergm: the assumptions are wrong
<jbjnr> heller: that's why it's my main worry - they are not reproducible, so not easily fixable, and highly symptomatic of some race or other dodgy bug deep inside hpx
<heller> yup
<heller> I suspect the executor changes
<zao> (note that I run a ton of tests at the same time, so timing is probably fortunately bad on my box)
<jbjnr> have they been merged? which ones are you referring to?
<heller> that there's some code path that we take now, which leads to that dodgy behavior
<jbjnr> ooh. I didn't realize they were merged already - that seems a bit casual to me
<heller> jbjnr: pycicle seems to have stopped recording the status updates
<jbjnr> heller: yes status is broken after change to add options to the build (clang/gcc + in future boost etc), path names are not consistent any more and I'm just doing the fix now
<heller> ah right
<jbjnr> making the scraping of results more robust so it can handle any set of paths
<heller> and can we please have just one runner doing inspect?
<jbjnr> should allow us to do any set of options and still work
<jbjnr> almost done
<heller> that'd be great!
<heller> #3071 will fix the debug builds...
<jbjnr> one runner doing inspect - I'll need to think about that - I moved inspect down the dashboard list so it was less annoying ...
<jbjnr> bbiab - go downstairs and ask about dashboard upgrade
<heller> jbjnr: yeah, hard to see the inspect failures from the dashboard though
<heller> but we are really close to a green dashboard now
<heller> the one on master and daint is related to apex
<heller> apex shouldn't be inspected
<heller> paper writing now ... two for ISC :/
<jbjnr> dashboard upgrade = february :(
<heller> buuhuu
<heller> by then, I have written my own dashboard with XML scraping...
<jbjnr> we can install dashboards elsewhere ...
<jbjnr> put one on rostam
<jbjnr> and shut down buildbot ...
<jbjnr> :)
<jbjnr> mind you. rostam is always so slow to respond - I'd rather have an old cdash at cscs than a new one at lsu
<heller> yeah ...
<heller> I think it might make sense to invest in a proper cloud server
<zao> What kind of storage requirements does CDash need?
<heller> where stellar-group.org points to, where we can put those services
<heller> zao: SQL database holding the results, I think you can set it up to purge the DB after some time or storage needs
<heller> zao: does your site offer hosting of web services?
<zao> Sadly not.
<zao> Was more thinking about what I might shove onto my Linode machine, but it's quite storage-constrained.
<heller> ok
<heller> might not work then
<heller> jbjnr: look! no failures!
<heller> at least for some runs ;)
<heller> I am also suspecting the hpxrun.py utility to be responsible for some of the timeouts...
<zao> heller: Did you see the tarball with logs above?
<heller> i have seen the link ;)
<zao> :)
<heller> very helpful though
<zao> Hrm, there's a lot of HPX(network_error) in there.
<heller> yeah, it's the stupid hpxrun.py script and the way cmake handles timeouts
<jbjnr> I vote we get rid of hpxrun altogether!
<jbjnr> always disliked it
<zao> Ah, so the next test starts before the previous timeout has been properly cleaned up?
<zao> Ah yes, logs validate that.
<heller> jbjnr: what's the alternative?
<heller> zao: yeah, more or less
<jbjnr> just use cmake the way everyone else does it
<heller> sure, but how will that work in the presence of mpirun/srun/whatever?
<heller> or when you want to start two localities without having srun available?
<jbjnr> I've been running mpi tests with cmake/ctest since before you knew what cmake was!
<jbjnr> since 2004 at least with paraview/vtk etc
<heller> ok, please propose something. also something that works without having mpirun (or the like)
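For reference, the "plain CMake" approach jbjnr alludes to might look like the sketch below, using CMake's standard FindMPI variables and falling back to a direct invocation when no launcher is available. The target name `my_test` and the process count are hypothetical; this is not HPX's actual test registration code.

```cmake
# Sketch: register an MPI-launched test directly with CTest.
# When mpirun/srun is not available, fall back to running the binary directly.
find_package(MPI)
if(MPIEXEC_EXECUTABLE)
  add_test(NAME my_distributed_test
    COMMAND ${MPIEXEC_EXECUTABLE} ${MPIEXEC_NUMPROC_FLAG} 2
            ${MPIEXEC_PREFLAGS} $<TARGET_FILE:my_test> ${MPIEXEC_POSTFLAGS})
else()
  add_test(NAME my_distributed_test COMMAND my_test)
endif()
```

CTest then handles timeouts and parallelism itself (`ctest --timeout N -j M`), which is the "everyone else" model from paraview/vtk that jbjnr refers to.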
<jbjnr> we need to get rid of those 50 warnings ....
<jbjnr> spoiling my dashboard!
<heller> ;)
<heller> go for it
<jbjnr> simbergm: does your PR remove some of those 50?
<heller> jbjnr: I really like how the one PR with the build failure now stands out :D
<simbergm> jbjnr: nope :(
<jbjnr> heller: welcome to the dark side!
<simbergm> at least not all of them, I don't know what comes after those 50...
<simbergm> can you change the limit of 50 btw?
<jbjnr> once everything goes green, finding the red becomes easy
<heller> simbergm: you can iteratively improve ;)
<simbergm> heller: are you talking about me fixing the warnings? :)
<heller> yes
<jbjnr> OMG. I didn't realize that a limit was set
<heller> simbergm: I took your statement as you wanting to volunteer here
<simbergm> yeah, I started at the easy end...
<simbergm> I've been looking at them, just not sure what to do about some of them yet... so I guess that's me volunteering ;)
<jbjnr> I do not know how to change the 50 limit
<simbergm> shame :/ but hopefully we can get it down to 0 and then it won't matter
<jbjnr> lol
<simbergm> well, below 50 and then it won't matter...
<jbjnr> still lolling
<simbergm> let me dream!
<simbergm> most of the visible ones are deprecation warnings anyway...
<jbjnr> we can tell cdash to ignore some if they are not real using regex
<zao> Ugh... not sure if this is horribly bad or horribly clever.
<zao> Parsing out the available tests from `ctest -N` and running each in isolation with ctest -R.
<zao> Won't get me a nice big XML file to send anywhere, but at least isolates the runs :)
<jbjnr> some dashboards have 'known' errors or warnings (like the crap that comes out of boost or MPI), and these can be masked
<simbergm> question is if we can distinguish between deprecation warnings coming from hpx and those coming from other libraries?
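The masking jbjnr mentions is typically done with CTest's custom warning exceptions, which are applied before results are submitted to CDash. A sketch follows; the regex patterns here are hypothetical examples, not patterns actually used by HPX.

```cmake
# CTestCustom.cmake, placed in the build tree: warnings whose text matches
# these regexes are dropped from the CDash submission, masking third-party
# noise (Boost, MPI) while HPX's own deprecation warnings still show up.
set(CTEST_CUSTOM_WARNING_EXCEPTION
  ${CTEST_CUSTOM_WARNING_EXCEPTION}
  "boost/.*[Dd]eprecat"
  "openmpi.*warning"
)
```

This only filters the dashboard display; the compiler output itself is unchanged, so it does not answer whether hpx-originated deprecation warnings can be told apart at the source.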
<jbjnr> ok, latest results came in, dashboard almost green!
<jbjnr> just that master build and scan results that are causing trouble.
<jbjnr> now I must start running MPI tests on daint
mcopik has joined #ste||ar
<jbjnr> simbergm: should I go ahead and merge 3056
<simbergm> jbjnr: if you want, I was just going to wait until buildbot finishes its current round
<simbergm> I'm 100% sure it won't break anything though :)
<jbjnr> buildbot?
<simbergm> mmh, buildbot
<github> [hpx] biddisco pushed 3 new commits to master: https://git.io/vbugB
<github> hpx/master 1f99e22 Mikael Simberg: Clean up some unused variables/unnecessary tests
<github> hpx/master 26485ed Mikael Simberg: Merge branch 'master' into fix-build-warnings
<github> hpx/master 0935b31 John Biddiscombe: Merge pull request #3056 from msimberg/fix-build-warnings...
<zao> ${sing_exec} /bin/bash -c "cd /tree/build-${flavor}; ctest -N | grep -E '^ Test #[[:digit:]]+: ' | cut -d' ' -f5" | parallel "${sing_exec} /bin/bash -c \"cd /tree/build-${flavor}; ctest -T test --timeout 70 -R \"^{}\\$\"\""
<zao> Lovely.
<jbjnr> wtf?
<zao> Extracting individual test names, and running each test in a network-isolated container in parallel.
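The name-extraction step of zao's one-liner can be sketched more readably as below. The sample `ctest -N` output and test names are hypothetical; the point is only to show how per-test names are pulled out for isolated `ctest -R '^name$'` runs.

```shell
# Simulate `ctest -N` output and extract just the test names, one per line,
# suitable for feeding to GNU parallel or a loop running each test in isolation.
sample='Test project /tree/build-flavor
  Test #1: tests.unit.parallel.scan
  Test #2: tests.unit.lcos.async_cb

Total Tests: 2'
printf '%s\n' "$sample" | sed -n 's/^ *Test #[0-9][0-9]*: //p'
```

Each emitted name would then be run as `ctest -T test --timeout 70 -R "^${name}\$"` inside its own container, as in the original command.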
<jbjnr> do you have real work to do by any chance?
<zao> Lunch!
<zao> Waiting for some bullshit ML/DL software to compile.
<zao> (so yes)
<jbjnr> k
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vbuaK
<github> hpx/gh-pages 8431225 StellarBot: Updating docs
<zao> jbjnr: Let's say it's job related in figuring out how to abuse Singularity :)
mcopik has quit [Ping timeout: 260 seconds]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
K-ballo has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
hkaiser has joined #ste||ar
david_pfander has quit [Ping timeout: 260 seconds]
<zao> Bah, CDash needs hacks to run on FreeBSD, and latest attempt was 2.2.3.
eschnett has joined #ste||ar
<heller> php hacks?
<zao> I think so.
<zao> Didn't look closer.
<zao> (my web/ftp machine at home runs FreeBSD, and it's the one that would be most stable for me)
<jbjnr> seems pretty clear from the dashboard which PRs should be avoided now ...
<jbjnr> does hkaiser have any inside knowledge about why some of the scan algorithms randomly fail?
<zao> Aren't those randomly seeded?
<jbjnr> yes, but ...
<jbjnr> I see what you're getting at
<zao> Would be interesting to see if they fail given a particular seed.
<zao> If it's a scheduling/division problem or a problem with the nature of the data, causing funny pivots.
<zao> (or whatever they do)
<jbjnr> heller: scroll to bottom and look at status https://github.com/STEllAR-GROUP/hpx/pull/3042 (numbers are fake, just testing)
<jbjnr> the old two would now be removed
<jbjnr> zao: ^correct
<jbjnr> but improbable for the scan algorithms themselves
<jbjnr> they partition the same regardless of the seed - the seed is just for the random numbers in the vectors
<zao> Ah.
<jbjnr> but still worth checking
<hkaiser> jbjnr: no inside knowledge - we were trying to fix this for a long time
<jbjnr> you know I still have my own scan implementation - would it be worth resurrecting for experimenting?
<hkaiser> jbjnr: maybe
<hkaiser> jbjnr: wrt #3067
<hkaiser> jbjnr: sorry for getting back to a merged PR, but could you have a look anyways?
<jbjnr> updated PR
<hkaiser> ahh, thanks - didn't see that :/
<jbjnr> I hope you checked the dashboard today and are basking in the green glow of happiness
<hkaiser> :D
<jbjnr> well, green-ish
<hkaiser> I hate the dashboard display
<jbjnr> haters gonna hate
<hkaiser> it's so unintuitive - must have been created by a linux person
<jbjnr> but it helped us get the job done
<jbjnr> its not unintuitive at all - it shows exactly what you need to see
<jbjnr> which PRs are red, and which not
<hkaiser> I can't see anything there
<jbjnr> sort by build time on the right
<jbjnr> what's not clear
<hkaiser> jbjnr: I really appreciate what you're doing there - it's not your fault that the dashboard is crap
<jbjnr> <sigh>
<jbjnr> + <facepalm>
<hkaiser> lol
<hkaiser> is there a way do get to the github PR for a build from the dashboard?
<jbjnr> hmm. good question. not sure.
hkaiser has quit [Quit: bye]
<zao> jbjnr: Doesn't seem to be any randomness in the test itself, all I see are iota vectors.
<jbjnr> k
<zao> Unless I've misread something.
aserio has joined #ste||ar
<simbergm> jbjnr: pycicle/cdash links on github don't work at the moment, no?
<simbergm> the newest prs don't have any link...
<heller> jbjnr: getting better ;)
<jbjnr> simbergm: new PRs have no links, but I'm now done with the fixed PR status, so I'll re-enable them
<simbergm> great :)
<heller> jbjnr: what if daint runs more than one config?
<jbjnr> then there will be more than 1 status
<jbjnr> <- like you asked for!!! ->
<jbjnr> I hope
<heller> Good
<heller> Just wondering since the names don't reflect this
hkaiser has joined #ste||ar
<jbjnr> heller the name should be "daint-clang-6.0.0" or "daint-gcc-6.2.0" what's not clear?
<hkaiser> jbjnr: are all of your builds run in Release?
<jbjnr> at the moment yes, but now that I can handle more options in the build names, I can add release etc and different boosts etc etc
<jbjnr> tomorrow.
<jbjnr> :)
<hkaiser> jbjnr: I think running tests in release is not the right thing
<hkaiser> we miss out on many things by doing so
<jbjnr> jesus effing christ - I'm doing my best!
<hkaiser> jbjnr: I know
<hkaiser> do more ! ;)
<jbjnr> failure - when your best just isn't good enough :(
<hkaiser> jbjnr: as said, I'm very grateful for what you're doing
<hkaiser> we all appreciate that very much
<jbjnr> no you're not - you don't even look at the dashboard. you're still using buildbot I bet!
<hkaiser> I'll stop making comments, then
<hkaiser> I don't even look at buildbot ATM ;)
<jbjnr> hah! I don't believe it
<jbjnr> buildbot is like an ex-girlfriend that hangs around hoping
<hkaiser> ROFL
<hkaiser> buildbot gives us debug builds ;)
<aserio> jbjnr: Does the new interface save the build times?
<jbjnr> aserio: click on the test build name and it lists the steps with timing. I think that's what you're after
<aserio> jbjnr: There are graphs?!?!
<jbjnr> yes, but it needs a few weeks to build up data etc to make them useful
<aserio> \me has a new hero
* aserio has a new hero
<jbjnr> fail^
* aserio will one day live up to his hero
<K-ballo> jbjnr: to be fair, nobody ever looked at buildbot much
<jbjnr> I'll believe aserio and hkaiser when buildbot is shutdown and replaced with pycicle :)
<jbjnr> K-ballo: yes it was useless
<hkaiser> not quite - but it's so much inferior to what you're doing!
<jbjnr> flattery.
<jbjnr> going for tea now ...
<simbergm> I think jbjnr is too far along already, but would it not be worth making a decision on whether we want to use pycicle, or something else? or is pycicle here to stay now?
<heller> jbjnr: boost version, mpi? Release vs debug? Sanitizer?
<hkaiser> heller: stop bashing our hero!
<hkaiser> ;)
<heller> Fwiw, I'm strongly vouching to ditch buildbot in favor of pycicle
<heller> And yes, I looked at buildbot regularly
<simbergm> there was talk about using some 3rd party service as well, so before jbjnr dedicates all his time to pycicle we should decide that's what we want, and that some resources from rostam can be used for pycicle builds (buildbot doesn't have to disappear if there are enough resources there...)
<simbergm> but as I said, I think it's already far enough that pycicle is here to stay :) and great btw...
eschnett has quit [Quit: eschnett]
<heller> Yes, I think we can easily migrate the resources on rostam to pycicle next week or so
<heller> And add resources from FAU
<heller> This will give us excellent coverage for PRs and master
<simbergm> we can, but hkaiser, aserio etc have to agree to do it :)
<github> [hpx] hkaiser force-pushed VS2017_15_5 from 3d66bfd to 14237dd: https://git.io/vbBXP
<github> hpx/VS2017_15_5 14237dd Hartmut Kaiser: Adapt MSVC C++ mode handling to VS15.5
<simbergm> I hope they agree
<hkaiser> simbergm: nobody objects to this
<simbergm> perfect, I was just sensing a bit of resistance ;) sorry if that was misplaced
<hkaiser> just teasing jbjnr...
<heller> Look at it as constructive feedback ;)
<simbergm> hkaiser, aserio: about https://stellar-group.github.io/hpx/, it wasn't as easy as I thought to update that
<simbergm> you were half-right about it being updated automatically, problem is that it only updates the docs but not index.html
<simbergm> who knows something about how that's set up?
eschnett has joined #ste||ar
<heller> simbergm: once simbergm and aserio agree, I can take care of buildbot on rostam. I have full access there. I suggest to do it slowly though
<simbergm> hkaiser and aserio I hope? :)
<heller> simbergm: I implemented it
<hkaiser> heller: you might want to at least talk to Alireza as well
<heller> hkaiser: sure
<simbergm> stellarbot = the docs builder on rostam?
aserio1 has joined #ste||ar
<heller> Yes
akheir has joined #ste||ar
<hkaiser> heller: yah the docs builder would probably need special care
<heller> Yes
<hkaiser> don't want to lose that one
<heller> And inspect is suboptimal as it is
<hkaiser> loose even
<simbergm> okay, do you mind if I try removing the index.html? it might get created automatically from the readme on master, but it's not clear from the documentation
<hkaiser> simbergm: pls go ahead
<simbergm> otherwise the docs builder needs to copy the readme from master to gh-pages
<simbergm> thanks
<heller> I suggest to have those on circleci, actually
<hkaiser> k
<hkaiser> fine by me
<hkaiser> and we need to run the header tests
aserio has quit [Ping timeout: 255 seconds]
aserio1 is now known as aserio
<heller> hkaiser: right, having those for each config doesn't hurt, i think
rod_t has joined #ste||ar
<hkaiser> heller: shrug
<gedaj> /quit'
<gedaj> /quit
gedaj has quit [Quit: leaving]
<heller> The good thing about pycicle is that we can run it on any service now...
<heller> Maybe even resurrect windows builders?
hkaiser has quit [Quit: bye]
akheir has quit [Remote host closed the connection]
aserio has quit [Ping timeout: 240 seconds]
akheir has joined #ste||ar
Smasher has joined #ste||ar
mcopik has joined #ste||ar
aserio has joined #ste||ar
aserio has quit [Ping timeout: 250 seconds]
aserio has joined #ste||ar
hkaiser has joined #ste||ar
parsa[[w]] has joined #ste||ar
parsa[w] has quit [Ping timeout: 250 seconds]
<akheir> heller: let's talk about switching from buildbot to pycicle sometime. I'd very much like to be involved
<akheir> heller, jbjnr: is there anywhere I could learn about pycicle? I'd like to know how it works and what it needs
<akheir> heller: there is absolutely no problem with you taking the initiative and taking care of stuff on rostam, I just want to be sure everything is documented and reproducible
<jbjnr> akheir: there are no docs yet, but I'll write something up. I think I'll put pycicle into its own git repo and add a readme there
<jbjnr> it's a single python script of 300 lines and a couple of cmake test drivers - really not much
aserio has quit [Ping timeout: 250 seconds]
<heller> jbjnr: having it in its own repo makes sense
<heller> akheir: sure thing
<heller> akheir: I have to play around with it first. I don't know more than you do
<akheir> heller: thanks, let me know what you need on rostam
<akheir> jbjnr: thanks
akheir has quit [Remote host closed the connection]
aserio has joined #ste||ar
<jbjnr> heller: simbergm pycicle github status setting is now enabled again by default. all should go back to auto building again
<jbjnr> (I shut it down for a few hours whilst I was testing the new stuff and fixing a few things)
<heller> jbjnr: so all good for adding new machines tomorrow?
<hkaiser> heller: I'd ask you to keep the doc builder running on buildbot until we have a replacement
eschnett has quit [Quit: eschnett]
<heller> hkaiser: sure. I'm not going to shut down stuff without replacement
<heller> hkaiser: all of this has to be a smooth transition anyways..
<heller> hkaiser: I'm going to experiment with pycicle on my local machines first before touching rostam
<jbjnr> heller: yes. should be ok. I'm writing docs now
<heller> Nice!
<jbjnr> you will need to fix bugs etc. I have added some new ones for you to find
<heller> Great!
<heller> Finding bugs in other people's code is my passion!
<heller> And then there's the UI issue... But that's secondary I guess
<heller> jbjnr: you should look at Sphinx and readthedocs for documentation
<heller> (would be something nice for hpx as well)
eschnett has joined #ste||ar
eschnett_ has joined #ste||ar
eschnett has quit [Ping timeout: 248 seconds]
eschnett_ is now known as eschnett
mcopik has quit [Ping timeout: 248 seconds]
quaz0r has quit [Ping timeout: 248 seconds]
taeguk[m] has quit [Ping timeout: 248 seconds]
mcopik has joined #ste||ar
taeguk[m] has joined #ste||ar
quaz0r has joined #ste||ar
<jbjnr> heller: docs. yes. HPX could do with a serious upgrade.
<hkaiser> aserio: yt?
<aserio> hkaiser: yes
<hkaiser> we started talking with Jeanine
<aserio> Skype?
<hkaiser> it's in the email
<hkaiser> see pm
patg[[w]] has joined #ste||ar
parsa[[w]] is now known as parsa[w]
sourojit has joined #ste||ar
rod_t has quit [Quit: Textual IRC Client: www.textualapp.com]
jaafar has joined #ste||ar
sourojit has quit [Quit: Page closed]
<patg[[w]]> aserio: yt?
<aserio> patg[[w]]: yes
<patg[[w]]> aserio: I'm done with the paper, as always I will have to put it through a lab review and need to add their canned statement
<aserio> Fine by me
<patg[[w]]> Let me know when you think its at a stage to do that, also Bryce is listed as lbl
<patg[[w]]> aserio: I had a couple of comments and fixed the reference thing
<aserio> Please let Keven know as well
<aserio> Cool
<patg[[w]]> Ok sending an email
<aserio> have you pushed your changes?
<patg[[w]]> yes
<aserio> :)
<patg[[w]]> aserio: please remind me to do the review thing when you think you are close
jaafar_ has joined #ste||ar
jaafar has quit [Ping timeout: 255 seconds]
<aserio> Can we not start that now?
<patg[[w]]> aserio: yes
<aserio> patg[[w]]: We might want to do that now as the Journal will require us to make changes anyway
<aserio> (I assume)
<aserio> Thanks for working on the Latex stuff
<aserio> hkaiser: you still there?
<hkaiser> yah
<hkaiser> aserio: ^^
<aserio> How do you point a new project to the hpx sln
<hkaiser> what project? phylanx?
Smasher has quit [Remote host closed the connection]
<aserio> No, one I am making
<hkaiser> use cmake as usual
<aserio> It cant find the dlls
<aserio> when I go to run it
<hkaiser> did it find the header files for compilation?
<parsa[w]> either copy the dlls or run it in hpx's build directory
<hkaiser> easiest is to start your application from inside the HPX bin folder
<aserio> "The code execution cannot proceed because hpx_iostreams.dll was not found."
<parsa[w]> ^
<aserio> why can't it find them? Do I have to pass another CMake variable?
<aserio> Or do I have to run from inside the bin folder
<hkaiser> parsa[w]: are you at cct?
<parsa[w]> i am
<hkaiser> yes, run it from the bin folder
<hkaiser> can you go over to help aserio?
<parsa[w]> no... he's locked himself in his room
<aserio> parsa[w]: knock and the door will be opened to you
<hkaiser> lol
<parsa[w]> too late i'm back at my desk now
<zao> In general on Windows, DLLs are found via the PATH environment variable plus a set of predefined locations (the application's own directory, CWD, system directories), subject to manifests
<zao> If you're dealing with say Visual Studio, the easiest way there tends to be to augment your PATH and then start devenv.exe
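Concretely, zao's suggestion amounts to prepending the HPX build's bin directory to PATH in the shell that launches the app (or devenv.exe). The sketch below uses a hypothetical build location and POSIX syntax for illustration; on Windows cmd the equivalent would be `set PATH=C:\hpx\build\bin;%PATH%`.

```shell
# Prepend the directory containing hpx_iostreams.dll (hypothetical location)
# so the loader can resolve it, then launch the app or IDE from this shell.
HPX_BIN="$HOME/hpx/build/bin"
export PATH="$HPX_BIN:$PATH"
echo "$PATH" | cut -d: -f1   # the HPX bin dir now leads the search path
```

Starting devenv.exe from such a shell makes debugging sessions inherit the augmented PATH, which avoids copying DLLs around.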
<aserio> Thanks guys
aserio has quit [Quit: aserio]
<jbjnr> heller: hkaiser et al https://github.com/biddisco/pycicle
<jbjnr> simbergm: ^^
<jbjnr> please forward to akheir
<hkaiser> thanks jbjnr
<jbjnr> going to bed now. Will check it still works tomorrow!
parsa has quit [Quit: Zzzzzzzzzzzz]
patg[[w]] has quit [Quit: Leaving]