hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
mbremer has quit [Quit: Leaving.]
eschnett_ has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]
detan has quit [Ping timeout: 256 seconds]
daissgr has joined #ste||ar
mdiers_ has joined #ste||ar
detan has joined #ste||ar
daissgr has quit [Ping timeout: 268 seconds]
daissgr has joined #ste||ar
<detan>
Getting this while trying to compile compute/cuda examples with clang -> clang: error: cannot find libdevice for sm_52. Any ideas?
K-ballo has joined #ste||ar
<simbergm>
heller_: what's the story behind 3711?
<simbergm>
diehlpk_work: release is tagged and uploaded, only announcements missing
detan has quit [Ping timeout: 256 seconds]
<heller_>
simbergm: as the ticket says, she wrote her thesis and is now asking for review ;)
eschnett_ has quit [Quit: eschnett_]
bibek has quit [Quit: Konversation terminated!]
hkaiser has joined #ste||ar
<simbergm>
heller_: all right :)
<simbergm>
btw, do you know how to fix rostam?
<heller_>
didn't take a look yet
<heller_>
i am in heidelberg all week
<simbergm>
ok, no worries
<simbergm>
files are missing (wonder if I did something...)
<simbergm>
I'll have a look
bibek has joined #ste||ar
aserio has joined #ste||ar
<hkaiser>
simbergm: what's wrong with rostam?
<simbergm>
hkaiser: good question
<simbergm>
buildbot directory is gone >_>
<hkaiser>
I meant: what are the symptoms?
<hkaiser>
uhh
<hkaiser>
akheir: yt?
<hkaiser>
aserio: if Ali comes in, could you give him a heads-up, please?
<hkaiser>
simbergm: Ali has been struggling with filesystem problems for a while now
<simbergm>
ok
<simbergm>
symptom is obviously that nothing runs
<aserio>
hkaiser: will do
<hkaiser>
thanks
<simbergm>
thanks hkaiser, aserio
<diehlpk_work>
simbergm, I will start the builds later today and once they've finished tomorrow, I will update f28.29.30
<heller_>
simbergm: hkaiser: jbjnr_: sent you more info on #3711
<heller_>
it's in german, unfortunately ;)
Abhishek09 has joined #ste||ar
<hkaiser>
thanks
<hkaiser>
K-ballo: yt?
<Abhishek09>
hello everyone
<akheir>
hkaiser: Hi
<hkaiser>
akheir: hey
<Abhishek09>
what's going on?
<hkaiser>
akheir: simbergm had some complaints about buildbot
<hkaiser>
Abhishek09: hey
<akheir>
hkaiser: I see
<akheir>
simbergm: Hi, I moved buildbot home to /buildbot
<parsa>
how do you want to do that? people use different platforms. you can't reuse what you built for ubuntu in fedora for instance
<parsa>
you'd need to create many wheel files for different distros in the end
<parsa>
but there should also be a generic version that builds everything
<Abhishek09>
I think that binding is designed for every platform using CMake
<parsa>
take pandas for instance. it doesn't have a wheel file for Alpine linux, so it compiles and builds everything it needs on Alpine Linux. However, in Debian it just downloads a wheel file that has everything prebuilt in it.
<Abhishek09>
i have seen that code
<zao>
There is the manylinux1 environment for generic Linux binary wheels. It’s way too low in compiler and standard versions for HPX I believe.
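(Editor's note: the wheel compatibility zao mentions is encoded in the wheel filename itself, per PEP 427: `name-version-pythontag-abitag-platformtag.whl`. A minimal sketch of decoding those tags; the example filenames are illustrative, not taken from the chat.)

```python
# Sketch: decode the compatibility tags a wheel filename carries (PEP 427).
# Format: name-version[-build]-python_tag-abi_tag-platform_tag.whl

def wheel_tags(filename):
    """Return (python_tag, abi_tag, platform_tag) for a wheel filename."""
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    # The last three dash-separated fields are always the compatibility tags.
    python_tag, abi_tag, platform_tag = parts[-3:]
    return python_tag, abi_tag, platform_tag

# A manylinux1 wheel, installable on most glibc-based distros:
print(wheel_tags("numpy-1.16.2-cp37-cp37m-manylinux1_x86_64.whl"))
# A pure-Python wheel that works anywhere:
print(wheel_tags("six-1.12.0-py2.py3-none-any.whl"))
```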
beauty has joined #ste||ar
<beauty>
hello
<beauty>
I am hello.
<zao>
I ported some of my code to C just to have something that would have a chance of building there.
<K-ballo>
hello hello beauty hello
beauty has quit [Quit: WeeChat 2.2]
<zao>
parsa: pip shot me in the feet the other day, cached wheels silently in my home dir, even as I was in a completely different virtualenv and could have had a different python :)
<Abhishek09>
twine will generate .whl for generic type
<Abhishek09>
hello parsa, where are you?
<hkaiser>
parsa: build everything including HPX?
<hkaiser>
well, that sounds like fun
<hkaiser>
and boost...
Amy1 has joined #ste||ar
<Amy1>
who knows other HPC channels?
<hkaiser>
Amy1: what other HPC channel?
<Amy1>
irc channel of hpc
<Amy1>
such as mpi openmp and so on.
<Amy1>
lol
nikunj has joined #ste||ar
<Abhishek09>
which method is easier for building dependencies with pip:
<Abhishek09>
requirements.txt or setup.py?
<Abhishek09>
hi nikunj, you are a GSoCer?
<nikunj>
Hi Abhishek09!
<nikunj>
yes I am
<Abhishek09>
are you participating this year?
<zao>
Maybe part of the work is figuring out the pros and cons of each :)
<nikunj>
Abhishek09, most likely I'm not. I'll be working with STE||AR at LSU instead of the gsoc way :)
<zao>
New minions? Neat!
<Abhishek09>
lsu means?
<nikunj>
zao, as always. I was interested in STE||AR from the beginning ;)
<nikunj>
btw, I went through the gsoc project list. Isn't the pip package for phylanx a bit too easy? I mean will it really take that long to generate a pip package?
<zao>
nikunj: I don't blame you, lovely people :)
<parsa>
zao: i don't trust virtualenv or anaconda. they prevent each other from destroying the other. i use docker if i want to experiment
<nikunj>
Abhishek09, lsu is Louisiana State University
<parsa>
hkaiser: yeah
<parsa>
hkaiser: build everything including HPX's dependencies
<parsa>
if need be
<Abhishek09>
nikunj, have you any idea about building dependencies with a package?
<zao>
I was testing some installation instructions for an end user for a package that referenced some of our cluster modules. Second clean environment I tried it in installed suspiciously fast :D
<Abhishek09>
I'm going through requirements.txt
<parsa>
zao: --no-cache-dir?
<zao>
parsa: Found and documented that one, indeed.
<zao>
Some day I'll check if we can force it by default on our systems, also --system-site-packages for virtualenv.
<nikunj>
Abhishek09, I usually prefer requirements.txt, but both ways are fine
<zao>
I still feel a bit of loathing toward setuptools for their whole "if CXXFLAGS are set outside in env, completely ignore the ones set inside"
<K-ballo>
that sounds... reasonable
<zao>
It's extra fun if the things you build make both binaries and libraries, so you can't even copy the options out into your env :D
<K-ballo>
the fact that it is hidden state is unfortunate, but otherwise it is fair
<Abhishek09>
but it doesn't work with pip freeze @nikunj
<zao>
Might have been addressed by now, the thing might've used some older version for reasons.
adityaRakhecha has joined #ste||ar
aserio1 has joined #ste||ar
<Abhishek09>
hello aditya
aserio has quit [Ping timeout: 252 seconds]
aserio1 is now known as aserio
<Abhishek09>
nikunj are u here?
<nikunj>
Abhishek09, yes I am
<nikunj>
what do you mean by "it won't work with pip freeze"?
<Abhishek09>
nikunj i am talking about this "$pip freeze > requirements.txt."
<nikunj>
I still didn't get you, you can always do pip freeze > requirements.txt to generate a requirements.txt for a project
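(Editor's note: `pip freeze` emits one pinned `package==version` line per installed package, which is exactly what ends up in the requirements.txt discussed above. A minimal sketch of parsing that format back; the package names in the example are placeholders.)

```python
# Sketch: parse a pip-freeze-style requirements.txt into {package: version}.
# pip freeze pins every installed package as "name==version", one per line.

def parse_requirements(text):
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, version = line.partition("==")
        pins[name] = version
    return pins

example = """\
# generated with: pip freeze > requirements.txt
numpy==1.16.2
pybind11==2.2.4
"""
print(parse_requirements(example))
```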
<parsa>
nikunj: creating a pip package for phylanx sounds easy but it is laborious. create a generic wheel for unix systems, plus several prebuilt and updated wheels for different OSs, all of which have to be created automatically and tested
<Abhishek09>
nikunj, tell me a safe way to build these dependencies.
<nikunj>
parsa, thinking it that way does make it a laborious task.
<nikunj>
Abhishek09, I'd say you research a bit yourself :)
<zao>
"Didn't think of that..." is a saying I use a lot with my colleagues.
<nikunj>
zao, XD
<parsa>
:)))))
<diehlpk_work>
Abhishek09, One important part of Google Summer of Code is to identify the problem and present a good solution
<diehlpk_work>
You do not need to implement things at this stage
<diehlpk_work>
Sometimes you’ll want to use packages that are properly arranged with setuptools, but aren’t published to PyPI. In those cases, you can specify a list of one or more dependency_links URLs where the package can be downloaded, along with some additional hints, and setuptools will find and install the package correctly.
<parsa>
Abhishek09: no clue what you're talking about... the only python packages phylanx depends on are numpy and pybind. you have to build everything else yourself
<diehlpk_work>
Yes, you have to provide not only the link to the package, but also scripts or python setup files to compile it
<parsa>
hpx is not a python package, and neither are highfive, boost, and the others that you need to run a phylanx program
<diehlpk_work>
It is not just adding some links there; if it were, we would have done it ourselves
<Abhishek09>
yes
<diehlpk_work>
along with some additional hints, and setuptools will find and install the package correctly.
<diehlpk_work>
This is the tricky part
<Abhishek09>
Yes i want to say that only
<diehlpk_work>
To provide the setuptools for all dependencies
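(Editor's note: the `dependency_links` mechanism quoted from the setuptools docs above looks roughly like the sketch below. All names, versions, and URLs here are hypothetical placeholders, and note that later pip versions dropped support for `dependency_links`, so this reflects the mechanism as discussed at the time, not a recommended modern approach.)

```python
# Sketch of a setup.py using dependency_links, as the setuptools docs
# quoted above describe. Every name, version, and URL is a hypothetical
# placeholder; recent pip releases no longer honor dependency_links.

SETUP_KWARGS = dict(
    name="phylanx",
    version="0.0.0",          # placeholder version
    install_requires=[
        "numpy",              # dependencies published on PyPI
        "pybind11",
    ],
    # Where to fetch packages that are NOT published on PyPI:
    dependency_links=[
        "https://example.org/downloads/some_dep-1.0.tar.gz",  # hypothetical URL
    ],
)

if __name__ == "__main__":
    from setuptools import setup
    setup(**SETUP_KWARGS)
```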
<Abhishek09>
I have proof via links
<Abhishek09>
it comprises both what is available on PyPI and what is not
<simbergm>
diehlpk_work: yep, easier to remove them
<simbergm>
aserio: I think I'm finally done (that new wordpress editor is a pita...)
<Abhishek09>
hey parsa, these are the dependencies I have seen in the code
<diehlpk_work>
Abhishek09, Boost? Hwloc?
<diehlpk_work>
jemalloc or gperftools?
<Abhishek09>
these are not mentioned
<diehlpk_work>
You will need them for hpx
<parsa>
Abhishek09: yes they are dependencies, but they are not Python packages nor have any direct relevance to Python code. you have to build them into your package
<Abhishek09>
Now I will focus on phylanx
<diehlpk_work>
Sure, but Phylanx will not work without these dependencies
<diehlpk_work>
Ok, but for some there is no setuptool file available
<Abhishek09>
it is available
<Abhishek09>
setup.py.in
<parsa>
Abhishek09: "If you’re using Python, odds are you’re going to want to use other public packages from PyPI or elsewhere." HighFive, HPX, etc are not public Python packages and don't need to be. you just need them for Phylanx
<diehlpk_work>
Abhishek09, As far as I know there is no setup.py for hpx
<diehlpk_work>
And possibly not for blaze
<diehlpk_work>
For the others deps, I do not know
<Abhishek09>
it is available for phylanx
<diehlpk_work>
yes, but the setup.py for phylanx will need hpx, boost, hwloc, blaze to build
<Abhishek09>
we have to create a package for hpx if it's not there
<nikunj>
diehlpk_work, out of curiosity. Will we create multiple build types for phylanx as well using the pip package?
<diehlpk_work>
nikunj, This is all part of the project to come up with some solution how to provide a pip package for phylanx
<diehlpk_work>
I think that having a release build is fine for the beginning
<nikunj>
Ah nice! I still remember the first time I had to build phylanx, it was a bit of a tedious job since I forgot to enable a few options and add the flags while building it. I had to rebuild the whole thing :/
<diehlpk_work>
Having Phylanx installed on any Linux OS would be good enough for this project
<diehlpk_work>
Doing pip install phylanx on Fedora after 3 months would be really cool
<diehlpk_work>
If there is time left we can add other OS
jaafar has joined #ste||ar
<nikunj>
having a package for hpx would be cool too! Think about doing apt install hpx or yum install hpx or similar :o
<diehlpk_work>
nikunj, Fedora and OpenSuse have a hpx package
<nikunj>
Ohh! I didn't know.
<diehlpk_work>
daissgr, Is there anything left to be fixed on Power9?
Abhishek09 has quit [Ping timeout: 256 seconds]
<Yorlik>
Where can I find info about HPXs internal messaging?
<Yorlik>
I am thinking about implementing a memory arena linked list thing for messages which are automatically sorted by object, but maybe HPX can help me here?
akheir has quit [Quit: Konversation terminated!]
akheir has joined #ste||ar
<heller_>
Yorlik: we don't have such a document
<Yorlik>
Hmm. I just want to keep DRY ;)
<Yorlik>
And reinventing the wheel is not really my thing, as fun as such a task could be.
<heller_>
well, HPX doesn't do any sorting of messages
<Yorlik>
I was thinking about using a memory arena to have linked lists in it as efficient per object queues when having thousands or tens of thousands of objects
<Yorlik>
Basically no sorting necessary with that
daissgr has quit [Ping timeout: 246 seconds]
<heller_>
why do you need to store the messages?
<Yorlik>
It's just a buffer. I want to process messages to objects during an update cycle (frame), when the object is already cached.
<Yorlik>
So - I am not processing messages in order, but batched
<Yorlik>
messages are sorted by the type of component they affect
<heller_>
that's more or less implemented inside of HPX already
<Yorlik>
It's part of the entity component system I try to implement for maximum cache friendliness
<Yorlik>
That's why I asked -> I guessed HPX might have a bunch of goodies in store already.
<Yorlik>
I would like to load an object and then batch process all messages ready for it. How would I use HPX for that? Let's say I'm inside a loop which runs over components of an object which receives messages (async calls) through its public interface.
<Yorlik>
Basically a component is just a bunch of related data
<Yorlik>
and the objects only hold pointers to their components which live in arrays
<Yorlik>
During the update cycle the systems zip over the component arrays they are responsible for.
<Yorlik>
When the loop is at a specific component I need to pull all messages for that component that have arrived in that cycle
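(Editor's note: the per-component message buffering Yorlik describes can be sketched as below. Python is used here only for brevity, and the class and method names are made up for illustration; this is not HPX or any existing ECS API.)

```python
# Sketch of per-component message batching: messages arriving during a
# frame are bucketed by component, then drained in one batch while that
# component's data is hot in cache. All names are illustrative.
from collections import defaultdict, deque

class MessageBuffer:
    def __init__(self):
        self._queues = defaultdict(deque)  # component id -> pending messages

    def post(self, component_id, message):
        """Called whenever a message for a component arrives."""
        self._queues[component_id].append(message)

    def drain(self, component_id):
        """During the update cycle: take everything buffered for one
        component so it can be processed back-to-back."""
        return list(self._queues.pop(component_id, deque()))

buf = MessageBuffer()
buf.post("position", {"dx": 1})
buf.post("health", {"hit": 5})
buf.post("position", {"dx": 2})
print(buf.drain("position"))  # both position messages, batched together
```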
<heller_>
well, you just do a hpx::async<action>(id, ...);
<heller_>
or apply, or whatever
<heller_>
the rest will just be handled in the back ;)
<heller_>
gtg, sorry
<heller_>
but HPX will either send out the packages immediately or buffer them for later resends
<heller_>
in addition, you can use parcel coalescing
<Yorlik>
Essentially I need to understand how to control when and where actions are executed.
<Yorlik>
There is a sort of conflict of interest here.
<Yorlik>
If I just fire everything off async, the data affected might be in or out of the cache.
<Yorlik>
So I will not be cache optimized
<Yorlik>
The advantage ofc is getting rid of sleep states
<Yorlik>
But:
<Yorlik>
If I can batch process all functions/actions which affect certain groups of data, which ideally are smaller than a cache line, then my chances are high that I get cache optimized execution.
<Yorlik>
That's after all the entire purpose of entity component systems, at least from a performance point of view
<Yorlik>
parallelism still helps here: parallel loops with a reasonable chunk size when zipping over the data for example
<Yorlik>
but i want the processing of the data to be condensed at a point in time where I have the data in the cache
<Yorlik>
Really - nothing beats the advantages of cache friendly code, especially when you can afford to zip over plain old arrays / vectors
<Yorlik>
But that requires to hold off execution and find some sort of batch processing regime.
<Yorlik>
The solution, imo is in the end to have the right granularity