aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 256 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
daissgr has quit [Ping timeout: 265 seconds]
eschnett has joined #ste||ar
diehlpk has joined #ste||ar
mcopik has quit [Ping timeout: 240 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
diehlpk has quit [Ping timeout: 256 seconds]
K-ballo has quit [Quit: K-ballo]
eschnett has quit [Quit: eschnett]
Anushi1998 has joined #ste||ar
hkaiser has quit [Quit: bye]
Anushi1998 has quit [Quit: Leaving]
Anushi1998 has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
anushi_ has joined #ste||ar
Anushi1998 has quit [Ping timeout: 245 seconds]
anushi_ is now known as Anushi1998
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 245 seconds]
<simbergm> diehlpk_work: very nice regarding gsoc, congratulations!
<github> [hpx] sithhell force-pushed docker_image from defc097 to cbdbe66: https://git.io/vxFvP
<github> hpx/docker_image cbdbe66 Thomas Heller: Fixing Docker image creation...
<heller> simbergm: testing squash to build docker images now ... the perl script doesn't work anymore
<simbergm> heller: yep, hope it works out well
<heller> me too ... debugging this is very annoying
<Anushi1998> I tried cmake hpx with -HPX_WITH_DOCUMENTATION=true and also changed the default here https://github.com/STEllAR-GROUP/hpx/blob/master/CMakeLists.txt#L210-L212 but the configuration summary still shows it as OFF :(
<simbergm> hi Anushi1998, would you mind posting the configuration output? and did you start with a clean build folder?
<Anushi1998> No the folder was not clean!
<simbergm> some settings are cached and may lead to problems (might not be the case here)
Vir has quit [Quit: Konversation terminated!]
<simbergm> are you missing a D between - and HPX_WITH_DOCUMENTATION?
<Anushi1998> So sorry for that but now I am getting an error :(
<simbergm> no problem at all :) cmake is not exactly helpful with these kind of things?
<simbergm> without the ?
<Anushi1998> But I wonder why it did the same thing when I changed the default?
<simbergm> what system are you on? I've had trouble on ubuntu 16.04 and eventually found working versions using nix...
<Anushi1998> ubuntu 17
<simbergm> okay, output? :)
<simbergm> do you have all the dependencies?
david_pfander has joined #ste||ar
<Anushi1998> It is getting an error on quickbook
<simbergm> have you built it? (unfortunately my memory of how to build it is very vague)
<Anushi1998> one sec
<Anushi1998> Yes I remember doing this bjam Jamfile.v2
<Anushi1998> The output was just "found 2 targets" and nothing more
<Anushi1998> I think it is still not installed properly! :(
<simbergm> you might need to help cmake by setting BOOSTQUICKBOOK_ROOT
<Anushi1998> okay
<Anushi1998> simbergm: thanks :)
<simbergm> no problem, let me know how it goes
<Anushi1998> Sure
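For reference, a minimal sketch of the configure step being discussed, run from a clean build folder; the quickbook path is a placeholder and depends on where bjam put the binary:

    mkdir build && cd build
    cmake .. \
      -DHPX_WITH_DOCUMENTATION=ON \
      -DBOOSTQUICKBOOK_ROOT=$HOME/boost/tools/quickbook   # placeholder: point this at your quickbook build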
<github> [hpx] sithhell force-pushed docker_image from cbdbe66 to e4e0f4b: https://git.io/vxFvP
<github> hpx/docker_image e4e0f4b Thomas Heller: Fixing Docker image creation...
Anushi1998 has quit [Quit: Going for lunch]
nikunj_ has joined #ste||ar
Anushi1998 has joined #ste||ar
<github> [hpx] sithhell force-pushed docker_image from e4e0f4b to 9a97ba9: https://git.io/vxFvP
<github> hpx/docker_image 9a97ba9 Thomas Heller: Fixing Docker image creation...
Anushi1998 has quit [Quit: Leaving]
Anushi1998 has joined #ste||ar
Anushi1998 has quit [Ping timeout: 245 seconds]
Anushi1998 has joined #ste||ar
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vxNeR
<github> hpx/gh-pages 803a60d StellarBot: Updating docs
Anushi1998 has quit [Quit: Leaving]
Anushi1998 has joined #ste||ar
mcopik has joined #ste||ar
eschnett has joined #ste||ar
<github> [hpx] sithhell force-pushed docker_image from b5ce016 to c30d850: https://git.io/vxFvP
<github> hpx/docker_image c30d850 Thomas Heller: Fixing Docker image creation...
K-ballo has joined #ste||ar
<github> [hpx] msimberg created msimberg-patch-1 (+1 new commit): https://git.io/vxNGY
<github> hpx/msimberg-patch-1 437fe00 Mikael Simberg: Disable set_area_membind_nodeset for OSX
Anushi1998 has quit [Ping timeout: 245 seconds]
<github> [hpx] msimberg opened pull request #3281: Disable set_area_membind_nodeset for OSX (master...msimberg-patch-1) https://git.io/vxNGB
hkaiser has joined #ste||ar
<hkaiser> heller: any luck?
<heller> hkaiser: not yet :/
<hkaiser> :-(
<github> [hpx] sithhell force-pushed docker_image from 53fcb9f to 790e722: https://git.io/vxFvP
<github> hpx/docker_image 790e722 Thomas Heller: Fixing Docker image creation...
<heller> trying something different now...
Anushi1998 has joined #ste||ar
<hkaiser> heller: so what's wrong?
<hkaiser> didn't all of that work before?
<heller> the docker-compile.pl script is horribly outdated and doesn't work with the circle v2 images anymore
<hkaiser> heller: let's get rid of it
<heller> that's what I am trying to do
<hkaiser> we've done it for phylanx
<heller> not really
EverYoung has joined #ste||ar
<heller> hkaiser: the phylanx image is huge and contains all the intermediate build files
<heller> hkaiser: 8.34 GB
<hkaiser> nod, that's an orthogonal issue, isn't it?
<heller> no, that's exactly the point of the docker-compile.pl script
<hkaiser> k
<hkaiser> not sure what went wrong then, the idea was to remove the script without increasing the image sizes... :/
<heller> each docker run command creates a new layer
<heller> when you do a docker commit, you commit all layers
EverYoung has quit [Ping timeout: 276 seconds]
<hkaiser> sure, the changes have 'inlined' all commands into one, iirc
<heller> which changes?
<hkaiser> the phylanx changes
<heller> sorry, I am not following you
<hkaiser> heller: I wanted to say that we did change the phylanx docker script by inlining all commands into one in order to reduce the size of the resulting image
<hkaiser> I'm not sure why the size of the resulting image is that large now
Anushi1998 has quit [Quit: Leaving]
eschnett has quit [Quit: eschnett]
<diehlpk_work> simbergm, Thank you.
mcopik has quit [Ping timeout: 268 seconds]
parsa has joined #ste||ar
parsa has quit [Client Quit]
parsa[w] has quit [Read error: Connection reset by peer]
parsa[w] has joined #ste||ar
<parsa[w]> heller: the size of the image was smaller because it was built in a different container. the binaries were mounted and copied into a new container and that new container was turned into an image
<parsa[w]> heller: that script doesn't do anything besides parsing the fake Dockerfile as a config. this would work for you: https://github.com/STEllAR-GROUP/phylanx/blob/master/.circleci/config.yml#L51-L52
<hkaiser> parsa[w]: why is the phylanx image so large, then?
<parsa[w]> because it is built inside the same image
<parsa[w]> heller: you could improve it by merging this line and the one after it: https://github.com/STEllAR-GROUP/hpx/compare/master...docker_image#diff-6e72c6a06c3291e7fccf443ca341959cR10
<parsa[w]> but the short story is: every RUN command creates a layer and subsequent RUNs create new layers on top of it
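A minimal sketch of the layering point being made here (illustrative package names, not the actual HPX Dockerfile): cleanup only shrinks the image if it happens in the same RUN that created the files, because each RUN is frozen into its own layer.

    FROM debian:stretch
    # One RUN, one layer: the apt caches are deleted before the layer is
    # committed, so they never end up in the image.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends cmake g++ && \
        rm -rf /var/lib/apt/lists/*
    # A separate "RUN rm -rf /var/lib/apt/lists/*" would instead add a new
    # layer on top and leave the bytes in the earlier one.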
<heller> *shrug*
<heller> ok ... why do I keep forgetting my docker password?
<zao> Maybe it doesn't want you back in? :)
<heller> obviously ;)
<heller> hkaiser: I just pushed a docker image for you guys to get going ...
<heller> hoping to have a permanent solution soonish
<hkaiser> heller: thanks a lot!
<heller> still uploading
diehlpk has joined #ste||ar
hkaiser has quit [Quit: bye]
<github> [hpx] sithhell force-pushed docker_image from 790e722 to 6045a92: https://git.io/vxFvP
<github> hpx/docker_image 6045a92 Thomas Heller: Fixing Docker image creation...
<parsa[w]> heller: yeah
<parsa[w]> it shows you the unnamed containers
<parsa[w]> which we don't reuse
<heller> so you are telling me that a phylanx build is 4.32GB big?
<parsa[w]> it's about 950MB on a Mac, no idea why that's big over there
hkaiser has joined #ste||ar
<heller> parsa[w]: that's what I thought, I think the intermediate build files are kept
<parsa[w]> but when i was experimenting my image was 5.5GB, so i don't see anything suspicious
<parsa[w]> maybe, but it's not worth investigating... just delete them in the same RUN after installing them to remove the extras or build it elsewhere and commit to a live image
<heller> the HPX image, with all components and examples is 3.4 GB big, btw
<hkaiser> heller: can I try the uploaded file now?
<heller> hkaiser: yes
<hkaiser> thanks, I'll kick off a bunch of PR builds, then
<heller> I think I have a permanent solution now... let's see how https://circleci.com/workflow-run/bb2ce43c-c81b-44e8-a439-ae0f14187f7b goes
<heller> if that works now, we'll have it fully automated again by tomorrow
<parsa[w]> merge the line below with it
<heller> no it won't
<heller> this is a multi-stage docker image
<heller> and even if I'd merge the lines, it wouldn't help
<heller> docker layers only add up
<heller> meaning the ADD in line 8 contributes to the size; the rm later on only removes the files logically, so the total size will still be the same
<parsa[w]> yes they add up... that's the point... the rm is done in a separate layer
<parsa[w]> heller: literally: " you need to remember to clean up any artifacts you don’t need before moving on to the next layer"
<heller> I don't care, only the second stage contributes to the total size
<parsa[w]> yes people have wanted squash forever... it's still experimental
<parsa[w]> doesn't matter; if the size is not larger than before, that's great
<heller> parsa[w]: to put it differently: the ADD layer is the big offender here. the RUN layer only installs the stuff, more or less. even if I squashed the two RUN lines you mentioned, the total size would be the same.
<heller> I am not using squash. the multi-stage approach works just as well
<heller> without the shortcomings
<heller> "You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image."
<heller> exactly what we need.
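A rough sketch of the multi-stage approach described above; the image names, paths, packages, and build commands are placeholders rather than the actual docker_image branch Dockerfile.

    # Stage 1: build HPX. The ADDed sources and the intermediate build tree
    # stay behind in this stage and never reach the final image.
    FROM debian:stretch AS build
    RUN apt-get update && \
        apt-get install -y --no-install-recommends cmake g++ libboost-all-dev libhwloc-dev
    ADD . /hpx/source
    RUN cmake -H/hpx/source -B/hpx/build -DCMAKE_INSTALL_PREFIX=/usr/local && \
        cmake --build /hpx/build --target install

    # Stage 2: start from a clean base and copy only the installed artifacts,
    # "leaving behind everything you don't want in the final image".
    FROM debian:stretch
    COPY --from=build /usr/local /usr/local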
<parsa[w]> okay
<parsa[w]> heller: does that dockerfile work by itself now?
<heller> should, yes
<parsa[w]> it doesn't have the cmake stage
<heller> which cmake stage?
<heller> everything's there
<heller> we just copy files in the docker build step
<parsa[w]> okay
<heller> everything else is done in the docker environment from circleci
<heller> this docker run thingy was only needed in circle v1
<heller> where we used docker as a hack to cache the dependencies
<heller> in v2, docker is supported natively and we can use our docker image directly
<heller> much faster
<parsa[w]> for hpx builds, not for phylanx
<parsa[w]> i tried it
<heller> what exactly?
<parsa[w]> it made the process take 70 minutes instead of the current 56 minutes
<heller> the "build the build docker environment" shouldn't be every time
<parsa[w]> heller: what? you can cache that stage?!
<parsa[w]> i mean the "spin up environment" took 4 minutes every time
<heller> you mean when you do the workflow approach?
<parsa[w]> yes
<heller> workflow and using the docker is orthogonal
<parsa[w]> it added an extra 4 minutes to every part of my workflow, it's only 9 seconds in HPX's
<heller> there you go
<heller> parsa[w]: yes, because you are using the machine executor
<heller> parsa[w]: the /phylanx directory should go ...
<heller> gtg
<parsa[w]> i am using the machine executor BECAUSE using a docker image added 4 minutes
<heller> from https://circleci.com/docs/2.0/executor-types/ : start time: docker, instant; machine, 30-60 seconds
<parsa[w]> it's not instant: https://circleci.com/gh/STEllAR-GROUP/hpx/11809 took 35 seconds
<parsa[w]> but you have a small base image, i have to pull HPX's image which takes longer
<heller> which is usually cached
<heller> the machine executor doesn't have docker layer caching
<parsa[w]> no! it wasn't cached automatically. unless i have to do something to make it do so
aserio has joined #ste||ar
diehlpk has quit [Ping timeout: 256 seconds]
daissgr has joined #ste||ar
jakub_golinowski has joined #ste||ar
<heller> parsa[w]: hu?
<parsa[w]> heller: i don't think circleci caches the hpx image, it fetches it every time and it takes 4 minutes
aserio has quit [Ping timeout: 245 seconds]
<heller> You only need to fetch it once in any case, no?
<parsa[w]> yes
<heller> As said, workflows and the docker executor are two different things
aserio has joined #ste||ar
diehlpk has joined #ste||ar
<heller> So I'm not sure what you're trying to get at
<heller> Do you have a link to the build with the docker executor handy?
<parsa[w]> don't mind that the command fails
<heller> Hmm, you didn't use the hpx build env
<parsa[w]> that's just a test
<parsa[w]> btw it should've used stellargroup/hpx:dev
<heller> It doesn't ;)
<parsa[w]> doesn't matter, the spin up environment part is the question... is there any way i can cache it?
<heller> Depends on the VM you run in
<heller> It sure does matter
<parsa[w]> what do you mean it matters? two subsequent builds use the exact same image, and it's clearly not cached
<heller> They don't have to run on the same vm
<heller> What does matter is which image you use in your executor
<heller> And there's no need to run docker inside docker to build and test the stuff
<parsa[w]> i just remembered that phylanx:dev you see there is correct because i renamed it to phylanx:prerequisites
<heller> I'm talking about the image option in the config.yml
<parsa[w]> yes there is no reason to run docker inside docker. that was a test
<parsa[w]> what do you mean they don't have to run on the same VM?
<heller> circleci runs on the cloud
<heller> They have multiple machines, your builds, even if subsequent, might run anywhere
daissgr has quit [Ping timeout: 260 seconds]
<heller> In any case: the environment setup (which pulls some circle image) takes 3 minutes. The "build the build image" step (which pulls the hpx image and some Debian packages) takes two minutes
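A minimal sketch of the "image option in the config.yml" referred to above, i.e. using the prebuilt image directly as the docker executor instead of the machine executor; the job name and steps are illustrative only, and the image tag is the one mentioned in the discussion.

    version: 2
    jobs:
      build:
        docker:
          - image: stellargroup/hpx:dev   # prebuilt build environment pulled by CircleCI
        steps:
          - checkout
          - run: cmake -H. -Bbuild && cmake --build build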
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
david_pfander has quit [Ping timeout: 276 seconds]
diehlpk has quit [Ping timeout: 240 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
nikunj_ has quit [Ping timeout: 260 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
jakub_golinowski has quit [Remote host closed the connection]
aserio has quit [Ping timeout: 265 seconds]
jakub_golinowski has joined #ste||ar
<github> [hpx] sithhell pushed 1 new commit to docker_image: https://git.io/vxNb7
<github> hpx/docker_image 8649a33 Thomas Heller: Update config.yml
Smasher has joined #ste||ar
victor_ludorum has joined #ste||ar
<heller> hkaiser: did the new image work?
EverYoung has joined #ste||ar
EverYou__ has joined #ste||ar
EverYoun_ has quit [Ping timeout: 265 seconds]
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
EverYou__ has quit [Ping timeout: 240 seconds]
EverYoun_ has quit [Remote host closed the connection]
Smasher has quit [Read error: Connection reset by peer]
Smasher has joined #ste||ar
Anushi1998 has joined #ste||ar
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
Smasher has quit [Read error: Connection reset by peer]
Smasher has joined #ste||ar
EverYoung has joined #ste||ar
victor_ludorum has quit [Quit: Page closed]
aserio has joined #ste||ar
<hkaiser> heller: yes, thanks!
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
Anushi1998 has quit [Quit: Leaving]
Anushi1998 has joined #ste||ar
EverYoun_ has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
hkaiser has joined #ste||ar
aserio has quit [Ping timeout: 264 seconds]
anushi_ has joined #ste||ar
Anushi1998 has quit [Ping timeout: 255 seconds]
anushi has quit [Ping timeout: 240 seconds]
anushi_ is now known as Anushi1998
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
anushi has joined #ste||ar
aserio has joined #ste||ar
anushi has quit [Ping timeout: 255 seconds]
mcopik has joined #ste||ar
Antrix[m] has quit [*.net *.split]
Antrix[m] has joined #ste||ar
diehlpk has joined #ste||ar
anushi has joined #ste||ar
jakub_golinowski has quit [Quit: Ex-Chat]
EverYoun_ has joined #ste||ar
Anushi1998 has quit [Ping timeout: 240 seconds]
diehlpk has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
jakub_golinowski has joined #ste||ar
EverYoung has quit [Ping timeout: 265 seconds]
diehlpk has joined #ste||ar
diehlpk has quit [Remote host closed the connection]
hkaiser has quit [Quit: bye]
jakub_golinowski has quit [Client Quit]
aserio has quit [Remote host closed the connection]
aserio has joined #ste||ar
aserio has quit [Quit: aserio]
hkaiser has joined #ste||ar
jbjnr_ has quit [Read error: Connection reset by peer]
Smasher has quit [Remote host closed the connection]
anushi has quit [Ping timeout: 260 seconds]
anushi has joined #ste||ar
EverYoun_ has quit [Remote host closed the connection]
Anushi1998 has quit [Quit: Leaving]
EverYoung has joined #ste||ar
EverYoun_ has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
EverYoung has joined #ste||ar
EverYoun_ has quit [Ping timeout: 276 seconds]
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Read error: Connection reset by peer]
EverYoung has joined #ste||ar