hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC: https://github.com/STEllAR-GROUP/hpx/wiki/Google-Summer-of-Code-%28GSoC%29-2020
wate123__ has quit [Remote host closed the connection]
nikunj has quit [Read error: Connection reset by peer]
nikunj97 has quit [Ping timeout: 256 seconds]
nikunj has joined #ste||ar
wate123__ has joined #ste||ar
nikunj has quit [Ping timeout: 258 seconds]
nikunj has joined #ste||ar
nikunj has quit [Ping timeout: 252 seconds]
wate123__ has quit [Remote host closed the connection]
wate123_Jun has joined #ste||ar
hkaiser has quit [Quit: bye]
nikunj has joined #ste||ar
wate123_Jun has quit [Remote host closed the connection]
wate123_Jun has joined #ste||ar
Amy1 has quit [Ping timeout: 256 seconds]
wate123_Jun has quit [Ping timeout: 272 seconds]
Amy1 has joined #ste||ar
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
iti has joined #ste||ar
iti has quit [Ping timeout: 265 seconds]
iti has joined #ste||ar
iti has quit [Ping timeout: 246 seconds]
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
Pranavug has joined #ste||ar
Pranavug has quit [Client Quit]
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
Abhishek09 has joined #ste||ar
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
Abhishek09 has quit [Remote host closed the connection]
wate123_Jun has joined #ste||ar
<simbergm> zao: if and once you get around to building phylanx once more, is BOOST_ROOT set in your environment? and if yes, can you try changing the minimum cmake version required by phylanx to 3.13?
wate123_Jun has quit [Ping timeout: 252 seconds]
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
wate123_Jun has joined #ste||ar
Abhishek09 has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
gonidelis has joined #ste||ar
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
ibalampanis has joined #ste||ar
ibalampanis has quit [Remote host closed the connection]
Abhishek09 has quit [Remote host closed the connection]
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 240 seconds]
Abhishek09 has joined #ste||ar
nikunj97 has joined #ste||ar
<Abhishek09> nikunj97: ?
hkaiser has joined #ste||ar
wate123_Jun has joined #ste||ar
wate123_Jun has quit [Ping timeout: 252 seconds]
wate123_Jun has joined #ste||ar
simbergm has quit [Ping timeout: 240 seconds]
hkaiser has quit [Ping timeout: 240 seconds]
simbergm has joined #ste||ar
<zao> simbergm: Yes, BOOST_ROOT is in the environment, and the CMake build detects Boost.
<zao> libphylanxd.so links correctly, while the Python module doesn't.
hkaiser has joined #ste||ar
vip3r has joined #ste||ar
vip3r is now known as kale_
<hkaiser> simbergm: yt?
<simbergm> hkaiser: zao yep, here
wate123_Jun has quit [Remote host closed the connection]
wate123_Jun has joined #ste||ar
<hkaiser> simbergm: I'm not too fond of the #pragma once PR, but I can't say why :/
<kale_> hkaiser: I wrote a completely futurized implementation (taking inspiration from your CppCon presentation) of matrix multiplication. Please go through it and let me know if it suffices for the submission. [https://github.com/git-kale/GSoC-submission/blob/master/matrix_multiplication.cpp]
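A minimal sketch of the futurization pattern under discussion, assuming hpx::async, hpx::wait_all, and a simple row-blocked decomposition; this is not kale_'s actual submission (that lives in the linked repository):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Multiply rows [r0, r1) of A (n x n, row-major) with B into C.
    void multiply_block(std::vector<double> const& A, std::vector<double> const& B,
        std::vector<double>& C, std::size_t n, std::size_t r0, std::size_t r1)
    {
        for (std::size_t i = r0; i != r1; ++i)
            for (std::size_t k = 0; k != n; ++k)
                for (std::size_t j = 0; j != n; ++j)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
    }

    int main()
    {
        std::size_t const n = 512, block = 64;
        std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

        // One future per row block; blocks write disjoint rows of C, so no races.
        std::vector<hpx::future<void>> futures;
        for (std::size_t r = 0; r != n; r += block)
            futures.push_back(hpx::async(multiply_block, std::cref(A), std::cref(B),
                std::ref(C), n, r, std::min(r + block, n)));

        hpx::wait_all(futures);
        return 0;
    }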
<hkaiser> might be I'm too old fashioned
<hkaiser> kale_: nice, will have a look!
<simbergm> reason I asked about the minimum version is that since 3.12 cmake defaults to using *_ROOT variables to find packages, and we rely on that when we look for boost
<simbergm> hkaiser: :P
<simbergm> I don't know what to say to that
<hkaiser> yah
<hkaiser> just say ' shut up, you're an old fart and need to adapt to the future' or something ;-)
<simbergm> I would've stayed with header guards, but I got bitten by them too often (probably speaks more about my abilities than about header guards)
<simbergm> and I haven't yet encountered a strong enough reason not to use #pragma once
<simbergm> hkaiser: your words ;)
<Yorlik> hkaiser: I thought you believe in the future(s) ? ;)
<hkaiser> simbergm: I understand the rationale - changing a lifelong habit is not easy ...
<zao> simbergm: The core reason why one might not want #pragma once is when you've got copies of a header in several places in a file system, where it conceivably can be pulled in multiple times in a TU.
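For comparison, the two idioms being weighed here, shown on a hypothetical example.hpp (the guard macro name is made up):

    // example.hpp with a classic include guard: needs a project-unique macro name.
    #ifndef EXAMPLE_HPP
    #define EXAMPLE_HPP
    void example();
    #endif

    // example.hpp with #pragma once: a single line and no name to invent, but it is
    // non-standard and keyed on file identity, so two copies of the same header in
    // different directories can both end up included into one TU (zao's point above).
    #pragma once
    void example();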
<simbergm> for the phylanx cmake problems I think we need to require cmake 3.13 even in HPXConfig.cmake, and make our other find_package(Boost)'s loud (i.e. the ones that look for iostreams, program_options, etc.)
<kale_> diehlpk_mobile[m:, nikunj : Here is a link to my repository explaining how I will handle static library dependencies in the project [https://github.com/git-kale/GSoC-submission/tree/master/python_example]
<hkaiser> simbergm: I don't have any reason not to use them either, except for -- well -- a lifetime of not using them ;-)
<zao> When I surveyed compilers in the past, there was varying behaviour around symlinks and hardlinks as well, particularly around network filesystems.
<hkaiser> simbergm: yah
<simbergm> zao: I'm aware, and I don't see a good reason to have the same files in multiple locations
<simbergm> that's the only argument I've heard against it
<hkaiser> simbergm: Alireza at some point found out how you can prevent cmake from using the boost system libraries
<hkaiser> but both of us have forgotten what it was :/
<zao> Didn't we use to ship some Boost libraries locally?
<zao> Good reasons and actual reality tend to be two different things :D
<zao> But sure.
<simbergm> but I'll go and change them back when we get the first bug report because of that ;)
<hkaiser> kale_: looks nice - will probably not be too performant, but hey - that wasn't the point, was it?
<simbergm> hkaiser: most likely hkaiser, zao I'll make a
<hkaiser> kale_: what GSoC project would you be interested in working on?
<simbergm> oops
<simbergm> most likely Boost_NO_SYSTEM_PATHS
<zao> hkaiser: In EasyBuild when we want to strive to not use system Boost, we tell CMake BOOST_ROOT, Boost_NO_SYSTEM_PATHS=ON and Boost_NO_CMAKE=ON.
<hkaiser> simbergm: could be
<hkaiser> zao: does that help with the phylanx issue?
<zao> (the last thing would disable the targets and stuff too, so not quite desirable in general)
<simbergm> hkaiser, zao I'll make a PR anyway with some changes which may be causing that, you can try it out and see if it helps
<simbergm> those changes will be good to have anyway
<zao> hkaiser: The core problem here is that the build itself finds Boost just fine, all paths are correct.
<hkaiser> zao: Phylanx imports boost indirectly through the HPX targets
<simbergm> zao: we have some quiet find_package(Boost)s which may not be showing up...
<hkaiser> so it has to be some cmake snafu
<zao> The HPX libraries are built against the right one, haven't looked at the targets yet.
<zao> INTERFACE_LINK_LIBRARIES "HPX::hpx_public_flags;hpx::boost;hpx::boost::program_options;hpx::boost::filesystem;hpx::allocator;hpx::hwloc;HPX::hpx_base_libraries;HPX::hpx_interface"
<zao> Any particular reason why some of those are lowercase btw?
<hkaiser> zao: that's simbergm's 'magic'
<hkaiser> I'll let him explain
<simbergm> zao: the lowercase hpx:: ones are from the beginning of our cmake cleanup efforts, and are really internal targets
<hkaiser> (btw, I don't like that too much - but hey - I'm happy simbergm and rori keep the build system rolling)
<simbergm> they're not exported targets, and have been stuck with lowercase hpx::
<Yorlik> Is there a special HPXish way to make a shared pointer from a component inside the component, when you have this available? Or do I just use std C++ semantics?
<simbergm> so no good reason
<hkaiser> simbergm: could we rename the hpx:: targets to hpx_internal:: or something?
<hkaiser> or even HPX::internal::?
<simbergm> they're not really intended for public use
<hkaiser> Yorlik: a shared_ptr to yourself?
<simbergm> hkaiser: which part don't you like? ;)
<Yorlik> hkaiser: yes
<hkaiser> the hpx::/HPX:: naming
<hkaiser> people will not remember which to use
<simbergm> hkaiser: yeah, definitely
<hkaiser> hell, I don't remember which to use ;-)
<simbergm> hkaiser: yeah, agreed, as I said the hpx:: targets aren't public, we just can't hide them (but we can rename them)
<hkaiser> right
<simbergm> there are really only HPX::hpx, HPX::x_component that users should use
<hkaiser> simbergm: I'm not complaining, it's more of a QOI issue
<rori> yeah sorry I started to name the target hpx::<lib> before we added the namespace but now that's not very pretty ^^
<simbergm> I noticed btw that there was at least one HPX::hpx_init in phylanx now, even though that isn't meant to be public
<hkaiser> rori: no need to apologize, your work is absolutely appreciated!
<rori> we could actually just remove the little one in all targets that use it
<hkaiser> simbergm: ahh! was not aware of that
<zao> simbergm: I'm looking in my HPX CMakeCache.txt and I can't find any Boost_FILESYSTEM_LIBRARIES or similar entries, while they're referenced in HPX_SetupBoostFilesystem.cmake
<zao> simbergm: HPX was built with Boost in CPATH and LIBRARY_PATH. Do you think this might have an impact on how things were configured?
<rori> zao: The Boost_FILESYSTEM_LIBRARIES are defined while doing find_package(Boost COMPONENTS filesystem)
<rori> when you specify the component, the corresponding variables are usually set
<zao> I'm going to have to --trace this, aren't I?
<simbergm> zao: give me few more minutes, I'll get you a PR to try out
<rori> sorry I haven't followed the full conv ^^
<rori> thanks ms
<zao> rori: TL;DR - the Phylanx python module links with `-lboost_filesystem` and doesn't specify any library search path, pulling in the system one instead.
<rori> Ah ok thanks for the update!
<zao> (on my system, that is, no idea how a proper one handles it)
weilewei has joined #ste||ar
<simbergm> it helps to use the right source directory when testing... getting there
<simbergm> it should at the very least be more verbose when finding program_options and filesystem
<simbergm> we should also not define our interface targets for boost, findboost already provides them...
K-ballo has quit [Remote host closed the connection]
K-ballo has joined #ste||ar
nan11 has joined #ste||ar
<zao> Queued up a build.
diehlpk_work has joined #ste||ar
<diehlpk_work> March 31 18:00 UTC Student application deadline
<diehlpk_work> Be aware that the deadline for GSoC is tomorrow. Looking forward to reading all the excellent proposals
<jbjnr> didn't they extend it due to the coronavirus
<hkaiser> jbjnr: not the student deadline
<kale_> hkaiser: I am interested in working on making a pip package for Phylanx.
<hkaiser> kale_: wouldn't it be better for you to work on something HPX related?
<zao> Time to build a Phylanx.
<weilewei> jbjnr the Concurrent Data structure Support project sounds fun, I am interested in maybe auditing, if possible
<hkaiser> kale_: now that you have shown you know c++ and HPX
<gonidelis> diehlpk_work have you taken a look at my proposal? I would be glad to hear some extra advice (hkaiser has already helped a lot!)...
<diehlpk_work> jbjnr, Only for the mentors
iti has joined #ste||ar
<weilewei> jbjnr is this project going to implement concurrent_unordered_set, concurrent_vector, according to bug tracker: https://github.com/STEllAR-GROUP/hpx/issues/2235
<diehlpk_work> We have one or two weeks more to review
<jbjnr> weilewei: don't worry - we won't get a decent proposal for that project so it will not go ahead
<zao> :D
<weilewei> jbjnr that's sad T^T
<hkaiser> jbjnr: come on, don't be so negative ;-)
<diehlpk_work> gonidelis, Now, I have so many proposals for the projects I mentor that I will not have time to look into yours
<Abhishek09> zao: Have u built a static version of hpx?
<diehlpk_work> Abhishek09, I think heller1 has
<hkaiser> Abhishek09: long time ago, not sure if this is still functional - simbergm?
<Abhishek09> diehlpk_work: have u seen my proposal?
shahrzad has joined #ste||ar
<hkaiser> weilewei: what do you mean by 'auditing'?
<nikunj97> hkaiser, last time I checked, it was working nicely
<kale_> hkaiser: I have been researching the project for nearly a month now and I have a clear idea of how the issues related to the project can be resolved. With only a day remaining, I don't think I can write a proposal and do justice to it :). I am willing to work with the org even after the gsoc project.
<nikunj97> that last time was 2 years ago :P
<zao> Abhishek09: I don't see any meaningful way to build HPX statically as there are shared libraries involved with components.
<zao> The design might be such that it works out, but I'm not familiar with how it's structured.
<hkaiser> kale_: ok
<diehlpk_work> Abhishek09, Not yet. I will work on my backlog today
<iti> Hello Patrick, will you be free to look into my proposal?
<Abhishek09> hkaiser: How did u do it? Does it take longer than the dynamic version?
<diehlpk_work> iti, This is what I am doing right now
<simbergm> Abhishek09: hkaiser static isn't functional at the moment
<iti> Thank you!
<simbergm> it's on the list but priorities...
<hkaiser> simbergm: yah, thought so - let's wait for somebody actually asking for it
<hkaiser> Abhishek09: not sure what you're asking
<simbergm> I think it'll be quite easy after we've done some more modules and cmake cleanup
<Abhishek09> hkaiser: How did u build hpx statically? Does it take longer than the dynamic version of hpx?
<hkaiser> Abhishek09: I don't think that's relevant, and why should it take longer to build a static version?
<hkaiser> zao: looks ok so far
<weilewei> hkaiser oh I mean, I want to be one of the mentors as well, if I can help, and at the same time, learn the project
<nikunj97> hkaiser, see pm please
<Abhishek09> hkaiser: if u r free see my proposal
<weilewei> hkaiser I implemented unordered_map in phylanx before, and would like to see how concurrent containers are going to be.
<Abhishek09> nikunj97: Have u built hpx statically recently?
<hkaiser> weilewei: do you want to apply to gsoc for this project?
<weilewei> hkaiser Can I as well?
<Abhishek09> hkaiser: And don't forget to give feedback on my proposal
<zao> simbergm: Link still does the bad thing.
<simbergm> zao: bleh :/
rtohid has joined #ste||ar
<Abhishek09> rtohid: hi
<rtohid> hey.
<nikunj97> Abhishek09, I haven't, sorry
<nikunj97> weilewei, sure, you can
<hkaiser> weilewei: absolutely
<nikunj97> the deadline is tomorrow though, so make a proposal quickly ;)
<simbergm> zao, hkaiser : so do you have more output? you say one target does the right thing and another doesn't?
<weilewei> hkaiser ahh!! Ok, I will write something then
<hkaiser> weilewei: sure, please do - jbjnr would be delighted to get a decent proposal, I'm sure
shahrzad has quit [Ping timeout: 240 seconds]
<Abhishek09> rtohid: i am sharing a proposal with you in private. Please see it and give feedback and help in improving it
<weilewei> hkaiser sure, I will contact jbjnr
<zao> simbergm: Link commands for libhpx_phylanxd.so.0 and _phylanxd.cpython-37m-x86_64-linux-gnu.so: https://gist.github.com/zao/75f7096f1e4e921648642881804b43bc
<Abhishek09> hkaiser: Which project are you mentoring this year?
<diehlpk_work> iti, Looked at your proposal and added a few comments.
<hkaiser> simbergm: not sure whether it's an issue of targets or not - HPX built for zao, but Phylanx did not - even though Phylanx gets its boost dependency through hpx targets
<zao> Note how the link for libphylanx uses absolute paths to Boost libraries, while the link for the Python extension uses -lblargh
<hkaiser> Abhishek09: not sure yet
<hkaiser> most likely something hpx related
<zao> It's not just Boost, it also does that for hwloc and openblas.
<Abhishek09> hkaiser: can u help in building hpx statically?
<hkaiser> Abhishek09: not really - why do you need that?
<simbergm> hkaiser, zao: I'll have to look at phylanx
<simbergm> clearly those two targets are doing something differently
<simbergm> are there more targets that don't link correctly?
<hkaiser> simbergm: the python extension is built using the pybind11 macros
<zao> I don't know.
<weilewei> jbjnr I am interested in implementing concurrent_unordered_set and concurrent_vector in HPX for a GSoC project. What do you think? I can write a proposal draft for you today.
<hkaiser> simbergm: while it's a shared library, it still needs hpx::hpx_init, however
<simbergm> uhh, so it may be doing some magic...
<simbergm> hkaiser: why?
<hkaiser> weilewei: see #2235
<simbergm> it defines a main?
<hkaiser> simbergm: it launches HPX while being a Python extension module
<weilewei> hkaiser yes, I saw it
<Abhishek09> hkaiser: i need a static build of hpx for making the wheel file
<simbergm> hkaiser: hmm, you need `hpx_init.hpp` but not `libhpx_init`
<simbergm> `libhpx_init` just contains all the optional entrypoints
<simbergm> and gives you the wrapping stuff now as well after that one pr was merged...
mdiers_ has quit [Remote host closed the connection]
mdiers_ has joined #ste||ar
<nikunj97> hkaiser, don't we have concurrent vectors already?
<nikunj97> I thought we did
Guest295 has quit [Ping timeout: 240 seconds]
pfluegdk[m] has quit [Ping timeout: 240 seconds]
diehlpk_mobile[m has quit [Ping timeout: 240 seconds]
ibalampanis has joined #ste||ar
kale_ has quit [Ping timeout: 240 seconds]
<ibalampanis> Hello to everyone! Have a nice day!
<rtohid> Abhishek09, please see DM for my opinions, but please get in touch with mentors in regard to the proposal.
<ibalampanis> hkaiser: I just finished the mini project with mm. HPX is a very interesting tool!
Guest295 has joined #ste||ar
<simbergm> hpx_init should depend on hpx though...
<zao> simbergm: Same result.
pfluegdk[m] has joined #ste||ar
diehlpk_mobile[m has joined #ste||ar
karame_ has joined #ste||ar
<Abhishek09> nikunj97: I'm not able to build hpx statically. Please help
weilewei has quit [Remote host closed the connection]
<Abhishek09> zao^
<zao> Abhishek09: I've already mentioned what I believe the likely result of trying to build HPX statically will be.
<zao> Fairly sure that in the past I have also suggested that an all-static approach is unlikely to work.
<Abhishek09> that means a static build of hpx will not work zao
<zao> Abhishek09: The concern I have is that HPX is kind of designed around multiple collaborating shared libraries, both the core HPX library and the pluggable components.
<Yorlik> I just managed to throw an exception in the <memory> header in a function marked noexcept :(
<Abhishek09> zao: nikunj97 say ` Manylinux to wheel conversions require static linking. While libraries like Boost are linking statically, you're not building static versions of HPX and Blaze`
<zao> While it may be possible to combine them as multiple static libraries into something big in the end, it's quite likely to need a fair amount of plumbing work.
<zao> Particularly threading all the dependencies through the builds can be hard.
<Abhishek09> That' why i need it
<Abhishek09> zao
<zao> It may still be possible to privately deploy shared libraries in the wheel, as long as those do not come from system packages.
<zao> Seems like that's one of the things that auditwheel's repair does.
<zao> As I'm not mentoring this, I can't really say anything about what kind of plan you should have, but it might be worth considering that you will have to try several different approaches.
weilewei has joined #ste||ar
ibalampanis has quit [Remote host closed the connection]
Abhishek09 has quit [Remote host closed the connection]
Abhishek09 has joined #ste||ar
bita has joined #ste||ar
weilewei81 has joined #ste||ar
<Abhishek09> nikunj97?
tarzeau has quit [*.net *.split]
tarzeau has joined #ste||ar
weilewei has quit [Ping timeout: 240 seconds]
rori has quit [Ping timeout: 246 seconds]
pfluegdk[m] has quit [Ping timeout: 260 seconds]
diehlpk_mobile[m has quit [Ping timeout: 240 seconds]
Guest295 has quit [Ping timeout: 240 seconds]
simbergm has quit [Ping timeout: 240 seconds]
jbjnr has quit [Ping timeout: 246 seconds]
heller1 has quit [Ping timeout: 256 seconds]
freifrau_von_ble has quit [Ping timeout: 260 seconds]
gdaiss[m] has quit [Ping timeout: 260 seconds]
kordejong has quit [Ping timeout: 260 seconds]
<Yorlik> hkaiser: will every sleep_for yield or is there a threshold?
Abhishek09 has quit [Remote host closed the connection]
<hkaiser> Yorlik: sec
<Yorlik> sure - np
ibalampanis has joined #ste||ar
nikunj has quit [Read error: Connection reset by peer]
nikunj has joined #ste||ar
iti has quit [Ping timeout: 240 seconds]
weilewei81 has quit [Remote host closed the connection]
<hkaiser> Yorlik: sleep_for always yields, I think
<Yorlik> Alright. Thanks! Actually that makes sense - so the user is in control if he wants a short busy loop.
<hkaiser> Yorlik: I was wrong
<hkaiser> it spins like the spinlock before yielding
<Yorlik> So a very short threshold.
<Yorlik> I wonder if you need that at all.
<hkaiser> Yorlik: in the end it will spin for anything below 1us or so
<hkaiser> perhaps slightly more
<hkaiser> 5us, maybe
<Yorlik> IC - I kinda have difficulties seeing the busy loop.
<Yorlik> The for loop seems to have a grain size though
simbergm has joined #ste||ar
<simbergm> zao: thanks for trying! I'll have to try to reproduce it myself, I'm out of ideas
<simbergm> you're building this on one of your clusters, right?
<simbergm> Abhishek09: FYI, static linking not working is a known issue: https://github.com/STEllAR-GROUP/hpx/issues/3970
<simbergm> it's most likely quite doable to get it working again, so if you really require it for your approach, you can add a step in your timeline to fix it
<hkaiser> Yorlik: look at the yield_k function that is called from the timed loop (it's defined in the same file, further up)
<Yorlik> So the loop is inside that?
<hkaiser> look at it
<hkaiser> Yorlik: kmbailey@lsu.edu
<Yorlik> I see the loop now
<Yorlik> k<4
<Yorlik> And that macro
<hkaiser> right, assuming the overhead for getting the timer value is ~300ns, it will spin a bit longer than 1us
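Roughly the pattern being described, as a generic spin-then-yield sketch (not HPX's actual yield_k; assumes hpx::this_thread::yield and a caller-supplied predicate, and the header name may vary by HPX version):

    #include <hpx/include/threads.hpp>

    #include <cstddef>

    // Spin for a few iterations before yielding: very short waits never pay the
    // cost of suspending, longer waits fall through to yielding the HPX thread.
    template <typename Predicate>
    void spin_then_yield(Predicate&& done)
    {
        for (std::size_t k = 0; !done(); ++k)
        {
            if (k < 4)
                continue;                 // busy-spin for the first few rounds
            hpx::this_thread::yield();    // then give the scheduler a chance
        }
    }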
<Yorlik> To gain a wee bit of efficiency I see room for optimization here - having a no-wait sleep yield
<hkaiser> Yorlik: go for it
<Yorlik> Especially if you don't need precision too much
<Yorlik> Could be a template parameter with a default value
jbjnr has joined #ste||ar
diehlpk_mobile[m has joined #ste||ar
<hkaiser> Yorlik: yah, the existing API should still work unchanged, it's standardized after all...
<Yorlik> I probably won't need it. I have a position in my code where I can get this lua state explosion, because I am not using an executor - I might just change that
<Yorlik> So I was looking into dodging lua state requests again and that question came up
<Yorlik> Using the limiting executor mechanics is just better
<Yorlik> Though it bears a tiny risk of deadlocks
<Yorlik> Limiting task numbers can in theory deadlock, if the tasks being dodged are required for the rest to continue
<Yorlik> Is HPX error: {what}: bad function call: HPX(unhandled_exception) the result of an action called on a deleted component?
<hkaiser> most probably something has gone out of scope, yes
<Yorlik> I'm experimenting with object deletion and have to clean up some issues
<Yorlik> Since my object model doesn't work with C++ standard constructor/destructor calling but uses fixed memory slots, everything is a bit unusual.
<hkaiser> apply<Action>(id, ...) should keep things alive, so I don't think this is caused by some HPX problem
<Yorlik> I think it's my code - I'll figure it out.
<hkaiser> Yorlik: you should still call constructors/destructors even if you manage your memory yourself
<Yorlik> I do
<hkaiser> then I don't understand what you just said
<Yorlik> I explicitly call the entity dtor when the memory is released to properly clean up everything, especially the components.
<hkaiser> nod
<Yorlik> The creation is a series of allocations and initializations and is used from different interfaces (local / network / entity, etc). It's a bit messy and I need to streamline it and clean it up.
<Yorlik> There are quirks and errors from a time when I understood much less of C++.
<Yorlik> BTW
<Yorlik> Changing that component backlink to a shared_ptr exploded everything
<hkaiser> no idea what 'exploded'
<Yorlik> I created it in the standard way and not with get_ptr and it tried to delete the component prematurely :)
<hkaiser> sure
kordejong has joined #ste||ar
<Yorlik> When the object dies, the entity is killed first
<hkaiser> what did you expect
mdiers[m] has joined #ste||ar
pfluegdk[m] has joined #ste||ar
freifrau_von_ble has joined #ste||ar
ms-test[m] has joined #ste||ar
rori has joined #ste||ar
gdaiss[m] has joined #ste||ar
heller1 has joined #ste||ar
<zao> simbergm: I'm building this on my build machine at home, using compilers and libraries from my EasyBuild-built module tree.
<Yorlik> I didn't expect anything. It's not like everything is always 100% under control.
<Yorlik> It was just a stupid mistake - somehow I can't get rid of making mistakes.
* Yorlik shrugs
<hkaiser> Yorlik: why do you want to hold a shared_ptr to yourself?
<Yorlik> The entity is managed by the object it holds a ref to. It's a circular dependency since logically they are one object
<hkaiser> ok
<hkaiser> use unmanaged(id) to break the circular reference
<Yorlik> Doing it this way allowed me to have this very efficient contiguous storage in memory
<Yorlik> But it comes at a price.
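A sketch of the two pieces mentioned in this exchange: hpx::get_ptr to obtain a shared_ptr to a component from inside it (which pins the component), and an unmanaged id for a back-link that must not keep its target alive. The component type is hypothetical, registration boilerplate is omitted, and exact header/namespace spellings (e.g. hpx::unmanaged) may differ between HPX versions:

    #include <hpx/include/components.hpp>
    #include <hpx/include/runtime.hpp>

    #include <memory>

    // Hypothetical component, only to illustrate the calls discussed above.
    struct entity_server
      : hpx::components::component_base<entity_server>
    {
        // A shared_ptr to ourselves: hpx::get_ptr pins the component so it is
        // not destroyed while the pointer is alive.
        std::shared_ptr<entity_server> self()
        {
            return hpx::get_ptr<entity_server>(get_id()).get();
        }

        // Back-link that does not keep the referenced object alive: unmanaged
        // ids do not participate in HPX's reference counting, breaking the cycle.
        void set_backlink(hpx::id_type const& owner)
        {
            backlink_ = hpx::unmanaged(owner);
        }

        hpx::id_type backlink_;
    };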
<zao> I'm going to see if I can understand Docker and see if I can repro it in the phylanx:prerequisites container.
Guest58431 has joined #ste||ar
wate123_Jun has quit [Remote host closed the connection]
<gonidelis> how many days do you think are sufficient for benchmarking a range based parallel algorithm ???
<gonidelis> @hk
<gonidelis> hkaiser *
wate123_Jun has joined #ste||ar
<hkaiser> gonidelis: you can spend a life-time doing perf benchmarks
<hkaiser> I don't think you should add that to the proposal, our range based algos are just wrappers around the iterator-based ones, so I'd expect no significant difference in timings
<hkaiser> if the performance is bad it's not your fault, and if it's good, it's not your 'fault' either ;-)
<gonidelis> hmm... I get it. I think that benchmarking should not be excluded though (just for safety reasons). I will propose just a small part of benchmarking throughout the period and I will try to complete the task of benchmarking independently of GSoC in autumn.
<gonidelis> What about unit-testing???
<gonidelis> Are they necessary? How many days could they take per algo?
<hkaiser> gonidelis: adding tests is more important
<hkaiser> writing the tests shouldn't be hard as you can use the existing iterator-based ones and adapt them
<hkaiser> perhaps half a day per algo, if not less
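To illustrate both points, a hypothetical range-based overload that simply forwards to the existing iterator-based algorithm, plus the kind of small test adaptation meant above (header and namespace spellings follow HPX 1.4-era conventions and may differ; this is not HPX's actual implementation):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_each.hpp>

    #include <cassert>
    #include <iterator>
    #include <utility>
    #include <vector>

    namespace ranges_sketch {
        // Range-based overload: forwards to the iterator-based algorithm, which
        // is why no significant performance difference is expected.
        template <typename ExPolicy, typename Rng, typename F>
        decltype(auto) for_each(ExPolicy&& policy, Rng&& rng, F&& f)
        {
            return hpx::parallel::for_each(std::forward<ExPolicy>(policy),
                std::begin(rng), std::end(rng), std::forward<F>(f));
        }
    }

    // A test adapted from an iterator-based one: same data, same checks, only
    // the call site changes from (begin, end) to the whole container.
    int main()
    {
        std::vector<int> v(1000, 1);
        ranges_sketch::for_each(hpx::parallel::execution::par, v,
            [](int& x) { x += 1; });
        for (int x : v)
            assert(x == 2);
        return 0;
    }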
<gonidelis> I get it. Thank you very much for your help.
Hashmi has joined #ste||ar
rtohid has left #ste||ar [#ste||ar]
<simbergm> K-ballo: you want to say something about pragma once? ;)
<simbergm> well, something more than you've already said?
<simbergm> at least say we're going to regret it...
<K-ballo> I said my part
<nikunj97> simbergm, where are we using pragma once?
<nikunj97> more importantly why are we using them?
<nikunj97> I don't like them
<nikunj97> and I feel most of the people here will concur.
<simbergm> nikunj97: please comment on https://github.com/STEllAR-GROUP/hpx/pull/4474 in that case?
<simbergm> what are your reasons for not liking them? I prefer them because they require a single line for declaring that a header should only be included once, and one doesn't have to come up with a unique name for the header guards (less error prone)
* Yorlik likes #pragma once and uses it throughout his codebase
Abhishek09 has joined #ste||ar
<Abhishek09> nikunj97: i am not able to build hpx statically. Please help!
<nikunj97> simbergm, they're not C++ standards conforming
<nikunj97> afaik
<nikunj97> Abhishek09, what is the error?
<Abhishek09> nikunj97: How do i proceed to build statically?
<simbergm> nikunj97: I'm aware, but in practice it's supported on all the compilers that we intend to support (plus some)
<Yorlik> nikunj97 - it's true - it's not standard, but all the big compilers support it and you can easily change it, if you ever need to - you could even let a python script do the work for you if needed.
<simbergm> if in the future we do actually want to support exotic compilers that don't support pragma once, we can very easily transform them back to header guards
<Abhishek09> Please share documentation on building hpx statically nikunj97
<nikunj97> simbergm, i'm just skeptical about using anything not standards-conforming in a place where there's a conforming solution
<simbergm> Abhishek09: I think it's been mentioned several times already that HPX currently can't be built statically
<nikunj97> but I'm just a guy with a few years (read 2) of C++ experience
<simbergm> HPX_WITH_STATIC_LINKING would be the option to enable it, but it doesn't work currently
<nikunj97> modern C++ ^^
<Yorlik> nikunj97: I think that scepticism is totally okay. But in that case I think you're not in danger of really shooting yourself in the foot by using the pragmas.
<Abhishek09> simbergm: but why has nikunj97 told me to build statically? Is nikunj97 not aware of that?
<nikunj97> Abhishek09, no I was not aware of it
<simbergm> Abhishek09: he may not be, but you can also make it a task to fix our static build
<nikunj97> and still yes, you will need HPX statically built for manylinux
<simbergm> it should not be a massive task
<simbergm> I don't think our static build being currently broken should rule out the option of having a static build in the future
<Abhishek09> nikunj97: May i leave it as it is in my proposal?
<Abhishek09> we will discuss later
<nikunj97> but then that would mean that your implementation is incorrect
<nikunj97> You can remove the cmake scripts that you're using
<nikunj97> and simply state that you want to statically build them
karame_ has quit [Ping timeout: 240 seconds]
<Abhishek09> nikunj97 : i only have to mention building hpx, blaze, and blaze_tensor statically. Then everything will be fine?
<nikunj97> yes
<Abhishek09> phylanx and pybind11 are also built dynamically. Did u forget that?
<Abhishek09> nikunj97
<nikunj97> Abhishek09, phylanx is our host application
<nikunj97> it's the one that needs to be converted to the wheel
<Abhishek09> Pybind11?
<Abhishek09> nikunj97 : but all will be bundled with phylanx ,
<nikunj97> well, yes that's an issue
<Abhishek09> i have also built pybind11 dynamically
<nikunj97> Abhishek09, come up with a solution :)
<Abhishek09> nikunj97
<Abhishek09> ^
<nikunj97> Abhishek09, idk, you'll have to come up with a solution to it
<Abhishek09> for phylanx?
<Abhishek09> nikunj97
<nikunj97> yes
<Abhishek09> What about pybind11?
<Abhishek09> nikunj97
<nikunj97> Abhishek09, you may want to use auditwheel repair for it iirc
<nikunj97> that's what it's made for: changing the RPATH entries of these libraries
<Abhishek09> yes it is used in manylinux
<nikunj97> Abhishek09, at the end, we want a pip package
<ibalampanis> @hk
<ibalampanis> hkaiser: bita is a great person! Thank you all for your support!
<nikunj97> ibalampanis, she's nice to work with ;)
akheir has joined #ste||ar
rtohid has joined #ste||ar
<Abhishek09> nikunj97: auditwheel also achieve a similar result as if the libraries had been statically linked without requiring changes to the build system
nikunj has quit [Read error: Connection reset by peer]
nikunj has joined #ste||ar
<Abhishek09> nikunj97?
<zao> Abhishek09: I recommend that you read up on auditwheel's "repair" function.
nikunj has quit [Read error: Connection reset by peer]
<zao> In particular, understand what it does and why that means that the libraries become independent of the host system.
<zao> Also, if you're quoting something verbatim as your "auditwheel also achieve a similar result as if the" statement there, you should probably use quotation marks.
<zao> As it stands, it could be anything from a question to a statement from you personally, to heaven knows what.
<zao> It's apparently a quote directly from the auditwheel readme.
nikunj has joined #ste||ar
<Abhishek09> already read it, it simply copies shared libraries into the wheel itself zao
<zao> That is not accurate.
<Abhishek09> zao by using auditwheel repair, do we not need static building?
<zao> Static linking and private deployment of shared libraries are two ways of achieving the same goal - the goal of having a self-contained package tree.
<Abhishek09> zao: please elaborate?
<zao> I assumed that you could find the documentation...
<bita> Thanks Ilias and nikunj :"D
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
<Abhishek09> But i'm not sure, that's why i'm taking advice from you: `by using auditwheel repair, do we not need static building`
<Abhishek09> zao
<zao> If we ignore what tools could do for us, the ultimate goal is to have an installation of Phylanx that does not depend on anything but the base system libraries (like libstdc++.so.6), right?
<zao> As there are library dependencies for Phylanx, those need to go inside the Phylanx installation in some way.
<zao> One way of doing so would be to static-link all the dependencies into the Phylanx extension, but that requires that all dependencies can be built as static libraries.
<zao> Another way of doing so is to add shared libraries into the directory tree of the extension somewhere, and make the extension find them at runtime.
<zao> There's two primary ways of finding shared libraries that are not in a system location on Linux.
<zao> One is by setting the LD_LIBRARY_PATH environment variable, which indicates additional locations to search for libraries.
nikunj97 has quit [Ping timeout: 252 seconds]
<zao> The other is a feature of the ELF file format of shared libraries and executables called RPATH (there's also RUNPATH). It allows you to embed a list of places to find a library, used by the loader at runtime.
<zao> RPATH has a feature where you can specify relative paths to libraries, so it always searches in a place relative to the ELF file that requires the libraries.
<zao> This is "origin-relative".
<zao> Now, auditwheel leverages exactly the above - it copies shared libraries into the directory tree and sets a relative RPATH in the extension to make it find the libraries at runtime.
<gonidelis> hkaiser I have updated my draft in order to commit my final proposal. Some final fixes have been made according to your suggestions. Plz check your mail. Thank you for your time ;)
rtohid has quit [Ping timeout: 240 seconds]
<hkaiser> gonidelis: will be able to look later tonight only
Hashmi has quit [Quit: Connection closed for inactivity]
rtohid has joined #ste||ar
<gonidelis> I would be glad.
ibalampanis has quit [Remote host closed the connection]
rtohid has quit [Ping timeout: 240 seconds]
nikunj97 has joined #ste||ar
<nikunj97> zao, do you know anything about this error: srun: error: task 0 launch failed: Invalid MPI plugin name
<nikunj97> I'm getting it irrespective of any executable i run with srun
<nikunj97> and it's only specific to one node
<nikunj97> one type of node ^^
<zao> Not off-hand.
<zao> I assume you've built with the right MPI implementation and have the module loaded?
<nikunj97> yes
<nikunj97> and it's with a hello world c++ code btw
<nikunj97> irrelevant to MPI
<zao> Does Slurm work if you salloc or sbatch first?
<nikunj97> let me try
<nikunj97> btw slurm works for other nodes
<nikunj97> like if I get one from arm nodes
<nikunj97> then it runs the executable
wate123_Jun has quit [Remote host closed the connection]
<nikunj97> zao, salloc works perfectly
Abhishek09 has quit [Ping timeout: 240 seconds]
<nikunj97> sbatch works as well
<nikunj97> why is srun giving me a headache then :/
<zao> I mean, can you srun from an allocation?
<nikunj97> nope
<nikunj97> that still gives the same error
<nikunj97> again, I can srun onto any other type of node. But there's one type which is giving this error.
<nikunj97> any clue, what could be happening?
mdiers[m] has left #ste||ar ["Kicked by @appservice-irc:matrix.org : Idle for 30+ days"]
ms-test[m] has left #ste||ar ["Kicked by @appservice-irc:matrix.org : Idle for 30+ days"]
Abhishek09 has joined #ste||ar
<Abhishek09> nikunj97: Where is it mentioned that ` Manylinux to wheel conversions require static linking`?
<nikunj97> I read it in some blog, but that was not using auditwheel
<nikunj97> that's why I told you later that auditwheel repair can make it happen
<Abhishek09> i have confirmed with #python, they said no such thing there
<Abhishek09> nikunj97
<Abhishek09> #pypa also said the same thing
<nikunj97> not sure how you'll package dynamic libraries without auditwheel
<Abhishek09> Can you share the website of that blog? i will give it to the pypa and python community
<nikunj97> Abhishek09, will have to dig in somewhere
<nikunj97> heller1, btw do you think I'll get any caching benefits with vector<vector<>>?
<Abhishek09> But they told me confidently, i am not sure who has the right information. They also pointed me to the manylinux guidelines to check that fact nikunj97
<heller1> No!
<heller1> On the contrary
<nikunj97> Abhishek09, I may be wrong, sorry
<nikunj97> heller1, why not?
<nikunj97> It'll add a line in cache and that should help
<nikunj97> how will cache work for a single stream of vector?
<Abhishek09> zao: What are ur thoughts regarding this fact ` Manylinux to wheel conversions require static linking`?
<heller1> nikunj97: extra layer of indirection.
<heller1> nikunj97: on every access.
<heller1> horror for each CPU
<nikunj97> aah! from CPU perspective sure
<heller1> but try it out, run it through vtune or somesuch and compare the hardware counters
<nikunj97> but would that not be better for cache if all the required lines are already present?
<heller1> try it, and then explain what you see
wate123_Jun has joined #ste||ar
<nikunj97> heller1, alright. Will give it a try
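For reference, the two layouts being compared, as a minimal sketch (the hardware-counter comparison heller1 suggests would be run on something like this with vtune or perf):

    #include <cstddef>
    #include <vector>

    // Nested layout: every row is its own heap allocation, so each access first
    // chases the outer vector's pointer (the extra indirection), and rows are
    // generally not contiguous with one another.
    double sum_nested(std::vector<std::vector<double>> const& a)
    {
        double s = 0.0;
        for (std::size_t i = 0; i != a.size(); ++i)
            for (std::size_t j = 0; j != a[i].size(); ++j)
                s += a[i][j];
        return s;
    }

    // Flat layout: one contiguous allocation and plain index arithmetic, so a
    // row-major sweep streams through memory and prefetches well.
    double sum_flat(std::vector<double> const& a, std::size_t rows, std::size_t cols)
    {
        double s = 0.0;
        for (std::size_t i = 0; i != rows; ++i)
            for (std::size_t j = 0; j != cols; ++j)
                s += a[i * cols + j];
        return s;
    }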
<nikunj97> heller1, I forgot to ask you
<nikunj97> do you know about tiling in stencil?
<nikunj97> chapel implemented diamond tiling in 2d stencil which gave them a good boost in performance, about 4x scaling
<heller1> There's a ton of material
<heller1> Ask me again tomorrow
<heller1> I'll give you lots
<nikunj97> alright!
<nikunj97> I want to optimize it as far as I can
<nikunj97> want to show HPX's potential in parallelism
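For reference, the basic idea behind tiling a 2D stencil sweep, shown as plain square blocking (diamond tiling, as in the Chapel work mentioned above, additionally skews tiles across time steps and is considerably more involved; grid layout and tile size are illustrative assumptions):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One Jacobi-style update of the interior of an ny x nx row-major grid,
    // processed in ts x ts tiles so each tile's working set stays cache-resident
    // instead of streaming the whole grid row by row.
    void stencil_step_tiled(std::vector<double> const& in, std::vector<double>& out,
        std::size_t nx, std::size_t ny, std::size_t ts = 64)
    {
        for (std::size_t ii = 1; ii + 1 < ny; ii += ts)
            for (std::size_t jj = 1; jj + 1 < nx; jj += ts)
                for (std::size_t i = ii; i < std::min(ii + ts, ny - 1); ++i)
                    for (std::size_t j = jj; j < std::min(jj + ts, nx - 1); ++j)
                        out[i * nx + j] = 0.25 *
                            (in[(i - 1) * nx + j] + in[(i + 1) * nx + j] +
                             in[i * nx + j - 1] + in[i * nx + j + 1]);
    }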
shahrzad has joined #ste||ar
<nan11> cannot open blaze library
gonidelis has quit [Ping timeout: 240 seconds]
nan11 has quit [Ping timeout: 240 seconds]
shahrzad has quit [Ping timeout: 252 seconds]
shahrzad has joined #ste||ar
Abhishek09 has quit [Remote host closed the connection]
weilewei has joined #ste||ar
shahrzad has quit [Ping timeout: 252 seconds]
<nikunj97> zao, the node was down xD
<zao> :D
<zao> sinfo is your friend there, btw.