hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
jaafar has quit [Ping timeout: 244 seconds]
diehlpk has quit [Ping timeout: 268 seconds]
eschnett has joined #ste||ar
eschnett has quit [Quit: eschnett]
<hkaiser>
Yorlik: that sounds interesting
<Yorlik>
hkaiser - would you have a moment in voice for that? It's tricky, but I believe worth it.
<hkaiser>
Yorlik: if the target object is a component, then I think we have sufficient customization points in place for you to insert such an object-based receive buffer
<Yorlik>
typing everything related to this is painful.
<hkaiser>
I understand
<Yorlik>
We could use skype or this website of yours - skype works now on my side - I sent you a message
<hkaiser>
you can also avoid creating an hpx thread for each of those things and handle them all in one go
<Yorlik>
There are several variables involved, and I believe there is no 100% ideal solution.
<Yorlik>
Basically what I want to do is a form of temporal data locality (the cache)
<Yorlik>
The devil will be in the details of the data and the operations needed on it.
<hkaiser>
skype just dies for me atm
<hkaiser>
no idea what's wrong :/
<Yorlik>
I believe understanding the parameters for that and then finding an individual solution is probably the best way, but a form of message pooling would surely help
<Yorlik>
website?
<hkaiser>
yah
<Yorlik>
what was the url again?
<hkaiser>
appear.in/stellar-group
Yorlik has quit [Read error: Connection reset by peer]
hkaiser has quit [Quit: bye]
parsa is now known as parsa_
nikunj has quit [Remote host closed the connection]
nikunj97 has joined #ste||ar
<nikunj97>
Looking into the GSoC project list, I think there is some sort of typo in the project "Large File Support for HPX OTF2 Trace Visualization". diehlpk_work, could you please look into it?
nikunj1997 has joined #ste||ar
nikunj97 has quit [Ping timeout: 250 seconds]
K-ballo1 has joined #ste||ar
K-ballo has quit [Read error: Connection reset by peer]
K-ballo1 is now known as K-ballo
nikunj1997 has quit [Ping timeout: 272 seconds]
daissgr has joined #ste||ar
nikunj97 has joined #ste||ar
hkaiser has joined #ste||ar
akheir has quit [Ping timeout: 252 seconds]
RostamLog has quit [Ping timeout: 255 seconds]
RostamLog has joined #ste||ar
<detan>
Having problems compiling examples/compute/cuda/data_copy.cu with hpx1.2.0/boost1.68/clang-7
daissgr has quit [Ping timeout: 268 seconds]
<detan>
boost_1_68_0/boost/system/error_code.hpp:241:30: error: exception specification of overriding function is more lax than base version
<K-ballo>
simbergm: the guided pool executor one looks like a genuine break to me
<detan>
heller_: But but but I was using 1.69 and you guys said I should use 1.68 instead... :(
<heller_>
detan: I didn't ;)
<detan>
lol... true...
<heller_>
current master and cuda is a bit broken though
<heller_>
but I think I mentioned that once
<detan>
heller_: yeah... I noticed. That's why I am trying with clang.
<heller_>
probably not much difference
<heller_>
I submitted a WIP PR, which still has some debugging output etc in it, which *should* work with nvcc
<heller_>
yes
<detan>
heller_: you mean the cuda stuff is still experimental?
<heller_>
what is your interest?
<detan>
heller_: oh... I see...
<heller_>
as said, the PR should work
<detan>
heller_: well, I was wondering if I could switch from thrust to hpx. Very interested in the distributed part.
<heller_>
ok
<heller_>
the HPX CUDA integration certainly isn't as nice as what you get from thrust
<heller_>
for the distributed part, you could just mix and match thrust and HPX
<detan>
heller_: developing a framework for particle simulation, and currently using thrust. Have looked into Kokkos as well.
<detan>
heller_: I see, but it will be in the future, right? ;)
<detan>
heller_: mixing both makes sense...
<heller_>
depends ... if there is enough inertia to get everything up to speed: why not. I don't think we have anyone dedicated to working on the HPX CUDA integration, so things will move slowly there, unless you are willing to invest time
<heller_>
the big advantage of HPX over Kokkos/Thrust however is the integration of futures
<heller_>
the big advantage of Thrust over HPX is certainly that their parallel algorithms are better tuned for GPUs ;)
<detan>
heller_: Yeah, I would be willing to dedicate some time for that. I liked the idea of working close to standard proposals...
<heller_>
indeed
<heller_>
we are an open community, every contribution is very much welcome
<detan>
Have you considered using thrust as the backend for now?
<detan>
There's not been much development going on in thrust, though.
<heller_>
you have to ping wash[m] for that
<heller_>
he is the maintainer of thrust
<heller_>
I'd very much welcome this
<detan>
lol... didn't know that. :P
<detan>
yeah, I am already subscribed to that PR. Thanks!
<detan>
So, I will try to mix thrust and hpx for now... and try to contribute to making hpx's GPU support better.
<heller_>
perfect!
<detan>
Brilliant, thanks!
<simbergm>
K-ballo: it sure does
<simbergm>
did one of your PRs change some (de)tuplification/unwrapping or something like that?
<simbergm>
it seems to be expecting a tuple<a, b> overload where there is only one for (a, b)
<K-ballo>
no
<K-ballo>
the impression I get is the guided executors were relying on "unspecified amounts of instantiation" during overload resolution, but I can't tell for sure
<K-ballo>
I tried msvc for that example, and it is hitting some instantiation of the executor members that I don't think it should be hitting..
<K-ballo>
but half of those are sfinae unfriendly, so they can't even be sniffed
<K-ballo>
I need jbjnr_ to explain to me which instantiations are expected by the design
<Abhishek09>
hello, can we use boost mpi instead of boost?
<hkaiser>
what for?
<zao>
Abhishek09: Please note that Boost.MPI is an individual Boost library for doing fancier C++ communication on top of an underlying MPI library.
<zao>
In particular, stuff like doing serialization with registered types and whatnot.
<zao>
We use several Boost libraries, but not Boost.MPI to my knowledge.
<hkaiser>
heller_: from the write-up you sent (wrt the ffwd scheduler), do I read it correctly that it does not reach the perf of the schedulers we currently use?
<heller_>
hkaiser: correct
_bibek_ has quit [Quit: Konversation terminated!]
<heller_>
hkaiser: I think it has some potential nevertheless
<heller_>
quote from last night (not from me): "Hey, do you know this library called HPX?"
<hkaiser>
heh
<hkaiser>
somebody asking you this question?
<hkaiser>
simbergm: isocpp and reddit are up now
<hkaiser>
heller_: sure, the ffwd stuff has potential, needs more work, though
<heller_>
yup
<heller_>
not sure how much she is willing to invest
<hkaiser>
sure, understand
<heller_>
hkaiser: yeah, someone asked me
<hkaiser>
thesis done -- forget what you did ;-)
<hkaiser>
heller_: cool - what did you respond?
<heller_>
hkaiser: "yes, I develop it"
<heller_>
the response: "Oh are you Hartmut Kaiser?"
<hkaiser>
I can visualize you standing there grinning ;-)
<hkaiser>
uhh, didn't know I was that infamous
<heller_>
;)
<heller_>
the only problem: talk scheduling fail
<heller_>
I will give my talk in parallel to a talk called: "Concurrency und Parallelität mit C++17 und C++20/23"
<hkaiser>
who's giving that talk?
<hkaiser>
I wouldn't worry about this, things will be fine
<heller_>
Rainer Grimm
<heller_>
na, all good, I think
<heller_>
two talks about HPX at the same time in any case ;)
<hkaiser>
is Rainer talking about HPX?
<K-ballo>
he has written many articles about the old Concurrency TS
<hkaiser>
nod
<heller_>
yeah, he mentioned HPX from time to time
<heller_>
I'd guess he switched to the intel library though
<hkaiser>
you mean parallel algorithms?
<heller_>
yeah
<hkaiser>
nod
eschnett_ has joined #ste||ar
Abhishek09 has quit [Ping timeout: 256 seconds]
aserio has joined #ste||ar
eschnett_ has quit [Quit: eschnett_]
<simbergm>
K-ballo: yep, jbjnr_ is most likely away this whole week, so you'd better ping him sometime next week
<simbergm>
thanks for fixing the rest
<simbergm>
hkaiser: I promise I'll do reddit one day...
<simbergm>
and thanks! ;)
<hkaiser>
simbergm: I'm here to serve ;-)
<simbergm>
lol
Abhishek09 has joined #ste||ar
hkaiser has quit [Quit: bye]
detan has quit [Quit: Page closed]
<Abhishek09>
hello everyone
<Abhishek09>
is anyone there?
<zao>
Probably.
eschnett_ has joined #ste||ar
diehlpk_work has joined #ste||ar
<diehlpk_work>
simbergm, hpx 1.2.1 passed on f28, f29, f30, and rawhide, and I started to push it to the official repos
<simbergm>
woop! nice
<diehlpk_work>
In one week it should be in the stable repo
<diehlpk_work>
Ok, they updated openmpi and we might need a patch
<diehlpk_work>
From latest stable 2.x to 3 or even 4
<Abhishek09>
hey diehlpk_work, can I use binary wheels for boost?
<diehlpk_work>
Abhishek09, What is a binary wheel?
<Abhishek09>
With a wheel, the user does not need to know anything about setup.py, which is used for building and installing source code. The user just needs to download and install a binary wheel package.
<Abhishek09>
Tools like auditwheel then take care of bundling external library dependencies into the binary wheel, so end-users need not have the library installed to use your package.
<Abhishek09>
hey diehlpk_work, are you here?
<diehlpk_work>
Abhishek09, Yes, but in a phone call
<diehlpk_work>
yes, this would work
<diehlpk_work>
you would need to do this for all deps
<Abhishek09>
hpx also?
<diehlpk_work>
yes
<diehlpk_work>
all deps of phylanx
<Abhishek09>
similarly, we do that for the dependencies of hpx!
<diehlpk_work>
yes
<diehlpk_work>
this is what makes the project difficult
<Abhishek09>
Anything more needed to make the pip package?
<diehlpk_work>
Abhishek09, It would be your task to answer these questions
<diehlpk_work>
One task of GSoC is that the student comes up with a proposal
<diehlpk_work>
You should prepare a proposal and propose a solution for how to generate a pip package for phylanx
<Abhishek09>
I'll do that later, when I have planned the entire task of making the pip package!
<Abhishek09>
That will increase the chance of my proposal being selected
<Abhishek09>
Anything more you want to say?
<diehlpk_work>
No, I think you are going in the right direction
Abhishek09 has quit [Ping timeout: 256 seconds]
adityaRakhecha has joined #ste||ar
<adityaRakhecha>
I am interested in this year's GSoC project `Test Framework for Phylanx Algorithms`. I am good with the prerequisites. How should I proceed?
<K-ballo>
I feel tempted to remove the tag changes from the ready-future PR
hkaiser has joined #ste||ar
aserio1 has joined #ste||ar
<adityaRakhecha>
Have I done something wrong? Nobody is replying.
aserio has quit [Ping timeout: 250 seconds]
aserio1 is now known as aserio
<hkaiser>
adityaRakhecha: have some patience please - I'm sure somebody will get back to you
<adityaRakhecha>
Sure. I also asked a question yesterday; that is why I wrote this. I will wait. :)
<K-ballo>
IRC is slow
<zao>
Proper etiquette mandates me to reply "you're slow!" ;)
<K-ballo>
something about a trout slap
khuck has joined #ste||ar
<khuck>
hkaiser: howdy - I am submitting the DD application. Is the OctoTiger project name "OctoTiger"?
<khuck>
aserio: ^^
<khuck>
aserio: what is the current funding source for OctoTiger? Are there previous funding sources?
<aserio>
khuck: I think the name is Octo-Tiger
<khuck>
thanks
<aserio>
one sec on the funding (in a meeting)
<khuck>
no prob
<khuck>
(that kind of information *should* have been in the most recent paper by Dirk/Gregor, btw ;)
<khuck>
aserio: if the information in the IJHPCA 2018 paper is correct, I'll use that.
<aserio>
khuck: "The development of the OctoTiger code is supported through the National Science Foundation award 1240655 (STAR)."
<adityaRakhecha>
Just out of curiosity, is there any way to stay connected to IRC all the time? I lose a lot of conversation during college hours.
<khuck>
aserio: thanks
<aserio>
khuck: this is what originally funded Octo-Tiger
<zao>
adityaRakhecha: Some people run bouncers like ZNC, which keep the connection running while you're not there and give you some backlog when you reconnect.
<zao>
Web clients like mibbit keep a shared backlog between participants.
<K-ballo>
those [m] in nicknames come from the Matrix bridge, I understand
<zao>
I use irccloud myself, but that requires a paid account for persistence.
<zao>
There's of course also the choice of running a client 24/7 on some machine somewhere :)
<diehlpk_work>
adityaRakhecha, Have you compiled Phylanx already?
<adityaRakhecha>
Cool I will look into them. @zao @aserio
<aserio>
khuck: Currently CCT is funding Dominic's work... so you could also add something to the effect of "support for Octo-Tiger's development comes from LSU's Center for Computation & Technology"
<aserio>
adityaRakhecha: the irclog is an in-house solution
<adityaRakhecha>
@diehlpk_work No, I have not compiled it yet. I have basic knowledge of data science and am building my first MP Neuron model for a contest on Kaggle.
<diehlpk_work>
adityaRakhecha, Compiling Phylanx and HPX would be a nice first step
<adityaRakhecha>
@aserio Ok. I will try this then.
<diehlpk_work>
Also read the hints for a successful proposal
mbremer has joined #ste||ar
<adityaRakhecha>
@diehlpk_work Cool on my way then.
<diehlpk_work>
You would need to submit a strong proposal for this task
<mbremer>
hkaiser: yt?
<adityaRakhecha>
@diehlpk_work Will you help and guide me?
<adityaRakhecha>
@diehlpk_work Then I am ready for it.
<diehlpk_work>
I can only advise you to read the documentation and try to find things out by yourself
<diehlpk_work>
Google Summer of Code is not like an internship where we tell you what to do.
<diehlpk_work>
You have to come up with a good plan
<adityaRakhecha>
Yes I understand.
<Yorlik>
Can someone explain to me why this explodes: "auto P = std::make_shared<game::controller> ( gamecontroller );" ??? ==> gamecontroller is an HPX component and I need a reference to it.
<K-ballo>
gamecontroller is a... (reference to an) instance of an hpx component of type game::controller ?
<Yorlik>
yes
<zao>
Note that std::make_shared<T> constructs a new T and returns a shared_ptr to that new object.
<K-ballo>
your code is asking for a new instance that's a copy of the one given
<K-ballo>
it sounds like you already have the reference you want
<Yorlik>
How is make_shared creating a copy?
<Yorlik>
I just need a pointer
<Yorlik>
gamecontroller is the component, not a pointer to it
<zao>
make_shared constructs a new object (together with a new refcount block). In your case, it's probably using the cctor.
<zao>
Akin to, but not equal to, std::shared_ptr<T>(new T(gamecontroller));
<Yorlik>
I thought make_unique only creates an object.
<Yorlik>
game::controller gamecontroller;
<K-ballo>
all those create objects
<Yorlik>
that's how I create gamecontroller
<K-ballo>
you wouldn't actually be creating a component there, just a regular object
<zao>
If you have an existing shared_ptr, you copy it to get a co-owning shared_ptr. If you have a weak_ptr, you lock it to get a new co-owning shared_ptr.
<Yorlik>
So - I need the other version of make_shared with the type parameter?
<K-ballo>
you don't need make_shared at all (if you are dealing with hpx components)
<zao>
If you have a variable with automatic storage duration, you can't get a meaningful shared_ptr to it.
<Yorlik>
How could I get a safe reference to a component then?
<K-ballo>
hpx::get_ptr was it?
* Yorlik is puzzled
<Yorlik>
Ohh ..
<zao>
If you have a member of some object held by shared_ptr, you can use the aliasing constructor of shared_ptr to use the refcount of the outer object.
<K-ballo>
or hpx::components::get_ptr? something like that
<zao>
But in the case of HPX components, there's probably HPX machinery to do things, like K-ballo said.
<adityaRakhecha>
<diehlpk_work> getting this while building blaze: CMake Error at /usr/share/cmake-3.5/Modules/FindPackageHandleStandardArgs.cmake:148 (message): Could NOT find Threads (missing: Threads_FOUND)
<Yorlik>
I'll check that out - get sth with 3 overloads - I'll find a way.
<Yorlik>
Thank you !
<K-ballo>
yep, hpx components are already "shared", because they are remote
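To illustrate the point zao and K-ballo are making: std::make_shared<T>(x) always constructs a brand-new T (copy-constructed when given an lvalue), and an object with automatic storage duration can never be owned by a shared_ptr. A minimal, self-contained sketch with a hypothetical plain type (not an HPX component):

    #include <memory>

    struct widget { int value = 0; };

    int main()
    {
        widget w;                              // automatic storage duration
        auto p = std::make_shared<widget>(w);  // constructs a NEW widget,
                                               // copy-constructed from w
        p->value = 42;                         // modifies the copy; w.value is still 0
        // there is no way to obtain an owning shared_ptr to w itself;
        // for an HPX component, pass its global id around instead
    }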
<zao>
Gah... Singularity has become quite a lot worse with version 3.0 :(
<zao>
The stuff I used to do for my HPX building now requires root access to run.
<zao>
Time to look for alternative container runtimes, I guess :(
jaafar has joined #ste||ar
<Yorlik>
LOL - I just need gamecontroller.get_id() and pass that around.
<hkaiser>
mbremer: here
<mbremer>
hkaiser: So, profiling my flat MPI code with VTune, I get a bunch of calls in [vmlinux], which I believe is kernel-related. Do you have any idea what the source of this could be?
<mbremer>
The good news is that I believe the HPX implementation is 1.3 times faster than flat MPI on the Knights Landing.
Abhishek09 has joined #ste||ar
<Abhishek09>
hello guys
<Abhishek09>
Anyone participating in GSoC this year?
aserio has quit [Ping timeout: 250 seconds]
<Abhishek09>
is there anyone working on the pip package?
<Abhishek09>
hey parsa: I am building the package using wheels
<diehlpk_work>
adityaRakhecha, Google is your friend
<hkaiser>
mbremer: nice
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
<hkaiser>
mbremer: and no idea what vtune is trying to tell you, sorry
<diehlpk_work>
adityaRakhecha, A major part of GSoC is finding solutions, and this implies trying to understand error messages and googling them
Abhishek09 has quit [Ping timeout: 256 seconds]
david_pfander has quit [Ping timeout: 268 seconds]
Abhishek09 has joined #ste||ar
adityaRakhecha has quit [Ping timeout: 256 seconds]
<hkaiser>
Yorlik: id_type id = new_<Component>(where); creates a new instance of Component on locality 'where'
<hkaiser>
std::shared_ptr<Component> p(hpx::get_ptr<Component>(id)); gives you a pointer to the instance you just created
<Yorlik>
I think I am messing up global and local use a bit.
<hkaiser>
get_ptr<> works only if id refers to an object that is local to its invocation (same locality)
<Yorlik>
I think I'll add that to my little log :)
<hkaiser>
well, get_ptr<> actually returns a future<shared_ptr<>>
<hkaiser>
but you don't need to use get_ptr<> as you can always use 'id' to refer to the instance, get_ptr<> would give you some optimization, but prevents the object from being migrated as long as you hold a shared_ptr to it
<hkaiser>
I also believe there is a sync overload: auto p = get_ptr<Component>(hpx::launch::sync, id); here 'p' is not a future, but the shared_ptr<> directly
aserio has joined #ste||ar
<Yorlik>
BTW - are you logging the IRC somewhere so one could read up?
<hkaiser>
also, to round things up: new_<Component> returns a future<id_type>, but if used with hpx::launch::sync as its first argument, it gives you the id_type directly
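Put together, a minimal sketch of the pattern described above, assuming a registered component type Component and taking the sync overloads as stated here (header names and exact signatures may vary between HPX versions):

    #include <hpx/include/components.hpp>
    #include <memory>

    // create an instance on this locality; with launch::sync the
    // id_type is returned directly instead of a future<id_type>
    hpx::id_type id =
        hpx::new_<Component>(hpx::launch::sync, hpx::find_here());

    // resolve the id to a local pointer; only valid on the same
    // locality, and pins the object (prevents migration) while held
    std::shared_ptr<Component> p =
        hpx::get_ptr<Component>(hpx::launch::sync, id);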
<Yorlik>
Thanks a ton !
eschnett_ has quit [Quit: eschnett_]
<Yorlik>
Just managed to explode it: " assertion '!naming::detail::has_credits(gid)' failed: HPX(assertion_failure)"
<hkaiser>
wow
<hkaiser>
that I want to see myself ;-)
<Yorlik>
If you want I can share my screen in a debugging session
<Yorlik>
Found the offending line already, but still don't understand why
<hkaiser>
no time to do screen sharing right now, but what you have should not happen
<hkaiser>
can you share the code that caused that?
<Yorlik>
It's a simple get_id call on my component after it was created
khuck has quit [Remote host closed the connection]
khuck has joined #ste||ar
aserio has joined #ste||ar
khuck has quit [Ping timeout: 250 seconds]
nikunj has joined #ste||ar
eschnett_ has quit [Quit: eschnett_]
eschnett_ has joined #ste||ar
<Yorlik>
Is there any way to call new_ with hpx::launch::sync on local?
<heller_>
yes
<Yorlik>
How would you do that? I got red squiggles all around the place
<Yorlik>
I assume the local locality would be implicit in the call, right?
<heller_>
hpx::local_new
<Yorlik>
Ohhh
<Yorlik>
thanks
<heller_>
but you'll always get a future back
<Yorlik>
I was searching full text for new_
<heller_>
NB: conference get-togethers with free beer don't make for good advice
<zao>
The underscore on new_ is there only because plain new is a C++ keyword.
khuck has joined #ste||ar
<heller_>
and people don't seem to like to overload operator new in a proper way ;)
<Yorlik>
IC
<Yorlik>
I assume getting the id would always be async, since it calls into the AGAS logic?
<Yorlik>
Oh I see - local_new also delivers a future - just that it's very present ..
<heller_>
yes
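For comparison, a hedged sketch of the two creation paths (same assumed component type Component as above):

    // distributed form: the target locality is given explicitly,
    // and the result arrives as a future
    hpx::future<hpx::id_type> f1 = hpx::new_<Component>(hpx::find_here());

    // local-only form: the local locality is implicit; this still
    // returns a future, just one that is essentially already ready
    hpx::future<hpx::id_type> f2 = hpx::local_new<Component>();
    hpx::id_type id = f2.get();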
<Yorlik>
So - hpx::finalize is simply like the old thread.join(), just in the world of hpx. Means I can launch all my asyncs and just forget about them, and as long as any async loops are running, finalize will just sit and wait.
<hkaiser>
Yorlik: finalize does not wait for things to finish, it just flags the runtime to exit whenever it's done doing things
<hkaiser>
gtg
hkaiser has quit [Quit: bye]
<heller_>
init, or stop waits until everything is done
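As a minimal sketch of the lifecycle just described, in the usual hpx_main/hpx::init pattern:

    #include <hpx/hpx_init.hpp>

    int hpx_main(int argc, char* argv[])
    {
        // launch asynchronous work here; fire-and-forget is fine
        return hpx::finalize();  // only *flags* the runtime to exit ...
    }

    int main(int argc, char* argv[])
    {
        return hpx::init(argc, argv);  // ... while init blocks until all
                                       // scheduled work has finished
    }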
khuck has quit [Remote host closed the connection]
khuck has joined #ste||ar
<Yorlik>
IC - so in my setup finalize exits out of hpx_main, and hpx::stop in main waits..
<Yorlik>
Is there any special reason why you created hpx::cout?
<Yorlik>
Any race prevention or something?
<heller_>
to have all output on locality 0
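A hedged usage sketch (the iostreams header name is from memory and may differ between HPX versions):

    #include <hpx/include/iostreams.hpp>

    // unlike std::cout, output from every locality is forwarded to and
    // printed on locality 0, so the logs of all localities end up in one place
    hpx::cout << "hello from locality " << hpx::get_locality_id()
              << hpx::endl;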
<heller_>
hitting the hay as well
<Yorlik>
Good night !
mbremer has quit [Quit: Leaving.]
eschnett_ has quit [Quit: eschnett_]
nikunj has quit [Ping timeout: 246 seconds]
nikunj97 has quit [Ping timeout: 268 seconds]
aserio has quit [Quit: aserio]
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar
nikunj has joined #ste||ar
hkaiser has joined #ste||ar
eschnett_ has joined #ste||ar
<K-ballo>
since 1.2 our debug symbols have shrunk by more than 600 MB
<K-ballo>
our binary releases by ~3 MB
<hkaiser>
wow
<hkaiser>
well done!
<K-ballo>
and that's with moving _more_ stuff into sources (though not as much as I would have liked)
khuck has quit [Remote host closed the connection]