hkaiser changed the topic of #ste||ar to: The topic is 'STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/'
jbjnr_ has quit [Read error: Connection reset by peer]
adi_ahj has joined #ste||ar
adi_ahj has quit [Quit: adi_ahj]
nikunj has joined #ste||ar
nikunj97 has joined #ste||ar
nikunj has quit [Ping timeout: 240 seconds]
adi_ahj has joined #ste||ar
hkaiser has quit [Quit: bye]
nikunj97 has quit [Ping timeout: 272 seconds]
nikunj97 has joined #ste||ar
adi_ahj has quit [Quit: adi_ahj]
adi_ahj has joined #ste||ar
nikunj has joined #ste||ar
nikunj97 has quit [Ping timeout: 268 seconds]
<tarzeau>
is anyone aware of manpages for hpxcxx/hpxrun(.py), i wouldn't mind a copy ?
<tarzeau>
now i could download master and apply pr and build, but if someone has a copy of the manpages i'd just like to wget/curl them for the debian package
<tarzeau>
hah merged mar 19, 1.3.0 released 2 months later, i'll just build it and have it :)
adi_ahj has quit [Quit: adi_ahj]
<mdiers_>
jbjnr_: :-)
adi_ahj has joined #ste||ar
nikunj97 has joined #ste||ar
nikunj has quit [Ping timeout: 268 seconds]
adi_ahj has quit [Client Quit]
jbjnr has joined #ste||ar
jbjnr has quit [Client Quit]
jbjnr has joined #ste||ar
<jbjnr>
mdiers_: there are two commits on this PR branch https://github.com/STEllAR-GROUP/hpx/pull/4306 - one is a unit test created from your stub of yesterday. There was a bug in one of the executor forwarding overloads so the thread num hint was lost. The test shows how to use it with the right scheduler.
nikunj97 has quit [Ping timeout: 265 seconds]
<simbergm>
tarzeau: note that the manpages are just our usual docs put into manpage format by sphinx, it's not really a great manpage (it's massive)
<heller>
tarzeau: hpxrun.py doesn't really have good documentation, sorry
<heller>
tarzeau: do you want to use it for something?
<simbergm>
heller: it's most likely for the debian package
<tarzeau>
simbergm is right, heller: just for the debian package
<jbjnr>
can anyone remember who the german chap that was writing an online game with hpx was?
<tarzeau>
simbergm: i noticed it's just a single huge page with the ordinary doc
<heller>
jbjnr: yorlik
<heller>
jbjnr: I haven't seen him for a while though
<heller>
tarzeau: does debian require manpages?
<tarzeau>
heller: well no, but it spits out warnings if you don't ship them
<tarzeau>
and then before a package gets into the archive, someone @debian.org reviews it, and in the new queue the ftp master reviews debian/copyright, but i'm holding to the plan to get it in by the end of feb so ubuntu 20.04 will have it as well
<tarzeau>
i just use symlinks of hpxrun.1 and hpxcxx.1 to hpx.1, that's good enough
<heller>
ok
<heller>
can you post a screenshot of the manpage?
<tarzeau>
reading the page, is there a plan to switch from slack to self-hosted matrix?
<heller>
not that I am aware of
<simbergm>
tarzeau: I'd love that but I'm probably the only one besides you :D. With slack we're kind of riding along with the rest of the c++ community
<simbergm>
and I think irc is not helping us build a community
<heller>
will a self hosted matrix do that?
<tarzeau>
no idea what matrix/slack adoption is like in the c++ community, but if something is already popular, it's probably not very easy to move people somewhere else (unless there's a good reason to switch)
<tarzeau>
i mean look at all the python2 users, python3 has been there for 10+ years
<tarzeau>
likewise with windows vs macOS/linux
<heller>
there's quite some traffic on the cpplang slack
<heller>
so a lot of people go there
<heller>
why do we need to have it self hosted? Isn't it good enough to have it on matrix.org?
<simbergm>
self-hosted is not really important to me, I like matrix.org
<simbergm>
slack is the best place to be close to the rest of the c++ community it seems like
<mdiers_>
simbergm: do you have a multi account client for linux? riot supports only one account
<simbergm>
mdiers_: no, haven't had the need for it yet
<simbergm>
I use riot.im
<tarzeau>
mdiers_: you can probably just set HOME=somethingelse to have multiple accounts with a client like https://github.com/Nheko-Reborn/nheko ?
<mdiers_>
tarzeau: yes, but riot is the only one with audio calls and a first take at screen sharing
<mdiers_>
simbergm: sounds like a simple workaround, multiple tabs via matrix.im
<simbergm>
mdiers_: right, firefox container tabs would probably work well
<mdiers_>
jbjnr: i can reproduce the bug
<jbjnr>
mdiers_: ?
<jbjnr>
mdiers_: did you try the new test?
<jbjnr>
heller: thanks. yorlik, I remember now. For when he next pops up: I have what he wanted working - it was the same problem as mdiers_ had.
<jbjnr>
I'm hoping the new test gives the correct output, with threads bound to cores in a round-robin periodic cycle. They should be launched on numbered threads and then report the correct thread id
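As a point of reference, a minimal sketch of the kind of check such a test performs, using only hpx::async and hpx::get_worker_thread_num; the thread-num-hint plumbing fixed in the PR is deliberately left out, so this only reports where tasks ran, it does not pin them:

    // Minimal sketch: launch a few HPX tasks and report which worker thread
    // each ran on. Uses only hpx::async and hpx::get_worker_thread_num; the
    // thread-num-hint forwarding discussed above (PR 4306) is not shown, so
    // this reports placement but does not control it.
    #include <hpx/hpx_main.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<hpx::future<void>> tasks;
        for (std::size_t i = 0; i != 8; ++i)
        {
            tasks.push_back(hpx::async([i]() {
                std::cout << "task " << i << " ran on worker thread "
                          << hpx::get_worker_thread_num() << "\n";
            }));
        }
        hpx::wait_all(tasks);
        return 0;
    }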
<primef>
Good morning to everyone, I hope I'm in the right channel to ask my question, otherwise please direct me to the right place. I'm working with HPX for a research project at my university and currently I'm using the tutorials to get to know HPX. However I ran across some strange behaviour. The transpose_numa_block example from GitHub runs well on one
<primef>
node, but performance drops drastically when executed on two nodes. After searching for a week for a solution, I thought I could ask here for help, as it is the official hpx example for the transpose operation. Thanks in advance!
<jbjnr>
primef: I don't think the transpose numa code has been maintained for years and it may be using some old/obsolete APIs
<jbjnr>
I had a look at it a while ago and decided that it was a shocking mess and I shouldn't waste my time on it
<jbjnr>
however, it might be worth cleaning it up if it serves as a good example and is performing ok on one node
<primef>
Thanks for your feedback! Alright, any hint where I can find a transpose sample (or code snippets) which uses NUMA and distributes across multiple nodes? In particular I was wondering if I have to explicitly distribute over NUMA domains or multiple nodes or if some API would take care of that automatically.
hkaiser has joined #ste||ar
<jbjnr>
distributed matrix work is a whole topic in its own right. We have a project at cscs developing that kind of thing with an hpx backend, but it's not ready for users like yourself yet
<hkaiser>
simbergm: \o/ Many thanks for the work on the release!
<hkaiser>
I'd offer you a t-shirt, but you already have one ;-)
<hkaiser>
I can offer you a mug instead, but I wouldn't know how to safely send that to you
<primef>
jbjnr: btw, thanks for the great presentation about HPX I found on Youtube from C++Day 2018. Was a great introduction!
<jbjnr>
primef: there is a numa_binding_allocator that will allow you to put memory on numa nodes
<jbjnr>
how you use it is up to you - there is a matrix tiling example somewhere that puts different parts of the matrix on different numa nodes
<jbjnr>
primef: ^c++day - you're welcome
<jbjnr>
but distributing the matrix you'll have to do by hand. I will try to look at the transpose example again and see if it can be cleaned up
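For what it's worth, "by hand" here just means slicing the matrix into tiles yourself and then deciding, per tile, which NUMA domain or locality it lives on; a plain C++ sketch of the tiling step (no HPX APIs, names are illustrative):

    // Plain C++ sketch: split an n x n row-major matrix into square tiles.
    // Placement of each tile (NUMA domain, locality) is then a separate,
    // per-tile decision; nothing HPX-specific is used here.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct tile
    {
        std::size_t row0, col0;    // top-left element of the tile
        std::size_t rows, cols;    // tile extents
    };

    std::vector<tile> make_tiles(std::size_t n, std::size_t block)
    {
        std::vector<tile> tiles;
        for (std::size_t r = 0; r < n; r += block)
            for (std::size_t c = 0; c < n; c += block)
                tiles.push_back(
                    {r, c, std::min(block, n - r), std::min(block, n - c)});
        return tiles;
    }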
<jbjnr>
hkaiser: question?
<hkaiser>
sure
<jbjnr>
dataflow doesn't honour the annotated tasks, but ...
<jbjnr>
if we added a get_annotation overload for a dataflow_frame
<jbjnr>
and made it fetch the annotation from the original task?
<jbjnr>
could you make it work? I spent a bit of time looking at dataflow_frame, but got a bit lost
<jbjnr>
same for continuation .then in principle
<jbjnr>
but the continuation wraps the task multiple times it seems, so it might not be so easy
<hkaiser>
I don't think you need to overload that for dataflow
<hkaiser>
dataflow itself might have to extract the annotation to pass it on, but I'm not sure
<hkaiser>
however as said - give me a small self-contained example and I'll have a look
<hkaiser>
we need to understand why things don't work first, then fixing it should be easy
<jbjnr>
the problem is that the task that is executed is the dataflow_frame finalize, which calls the annotated_task
<jbjnr>
and the get_function_annotation specialization for annotated_task does not kick in
<jbjnr>
so we get task name <unknown>
<jbjnr>
same for continuations
<jbjnr>
every time the task is wrapped by something, the task annotation is lost
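For reference, the usage pattern under discussion looks roughly like the sketch below (assuming hpx::util::annotated_function and hpx::dataflow; header paths are approximate for the HPX version in use). The complaint is that once the callable is wrapped inside the dataflow frame, the "my_task" annotation no longer reaches the scheduler and the task shows up as <unknown>:

    // Sketch of the pattern being discussed: annotate a callable and hand it
    // to dataflow. Ideally the scheduler/profiler sees "my_task"; the bug is
    // that the wrapping done by the dataflow frame drops the annotation.
    #include <hpx/include/dataflow.hpp>
    #include <hpx/include/lcos.hpp>
    #include <hpx/util/annotated_function.hpp>

    #include <utility>

    void example()
    {
        hpx::future<int> f = hpx::make_ready_future(42);

        auto task = hpx::util::annotated_function(
            [](hpx::future<int>&& v) { return v.get() + 1; }, "my_task");

        hpx::future<int> r = hpx::dataflow(std::move(task), std::move(f));
        r.get();
    }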
<hkaiser>
ok
<primef>
jbjnr: Thanks again for the help! It would be really great and a big help if you could look into the transpose example. In the meantime, I will try to work things out with NUMA, using the allocator. Sorry for bothering you again, but do you have any suggestion for distributing the matrix over multiple nodes by hand? Should I e.g. divide it into
<primef>
blocks and allocate them in agas?
<hkaiser>
so for which 'function wrapper' did we miss implementing get_function_annotation?
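Roughly what such a specialization looks like, sketched under the assumption that the trait keeps its usual shape of a static call() returning the annotation string; some_wrapper is a hypothetical wrapper type, not an actual HPX class:

    // Hypothetical wrapper carrying a callable plus an annotation string.
    // Without a specialization like this, tasks wrapped in some_wrapper lose
    // their annotation and are reported as <unknown>.
    #include <hpx/traits/get_function_annotation.hpp>

    struct some_wrapper
    {
        char const* annotation_;
        // ... the wrapped callable ...
    };

    namespace hpx { namespace traits {

        template <>
        struct get_function_annotation<some_wrapper>
        {
            static char const* call(some_wrapper const& f) noexcept
            {
                return f.annotation_;
            }
        };
    }}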
<simbergm>
hkaiser: do you think we could try to get 4265 in first? it's quite coarse and unless you're planning on making very big changes you might be able to stay within that module
<primef>
hkaiser: so to understand, you represent matrices using blaze, divide them into smaller matrices also using blaze, and then execute parallel computations on them using hpx?
<simbergm>
we can still split it up into smaller modules later, but having it in would help in continuing with other modules
<hkaiser>
simbergm: sure, I'll fix the conflicts
<hkaiser>
primef: right
<simbergm>
ok, thanks
<hkaiser>
simbergm: you don't change things, just moving files around, right?
<simbergm>
hkaiser: yeah, mostly, as far as I can remember now after a month of not looking at it...
<hkaiser>
k
<simbergm>
if the conflicts look bad I'll redo the module on top of your changes, just let me know
<hkaiser>
nah, go ahead
<hkaiser>
I'm experimenting a lot, currently
<hkaiser>
once things are done everything can be easily updated
<simbergm>
hkaiser: ok, thanks
<simbergm>
the main thing missing is a name for the module, want to have consensus on that before I go and rename and move everything
<simbergm>
`threading_base` would be my suggestion
<hkaiser>
simbergm: sounds fine, I don't care about the module names too much
<hkaiser>
they have to be unique is all
<simbergm>
very good :)
<simbergm>
hkaiser: btw, did rori catch you to ask about the future stuff?
<hkaiser>
she asked, I tried to answer (not sure how helpful I was), I have not heard back yet
<simbergm>
ok, no worries
<simbergm>
just wasn't 100% sure what you had in mind, but...
<hkaiser>
simbergm: I think we need to factor out the dispatch points for .then
<hkaiser>
and then specialize those in the different modules
<hkaiser>
might require more than just moving things around, however
<simbergm>
the only thing is that future_then_dispatch has to be either defined in the synchronization module (not what we want), or a template parameter (probably what you had in mind)... `template <typename T> using future = detail::future<T, future_then_dispatch<...>>;`
<simbergm>
something like that?
<hkaiser>
I don't think we need to make it a template parameter (at least I hope we can get away without that)
<simbergm>
how would you make the actual implementation available to future_base (or future/shared_future)
<simbergm>
?
<hkaiser>
simply remove all but one .then from future itself, and let the remaining absolutely generic one forward to a dispatch point that can be specialized
<simbergm>
if future/shared_future go in the execution module future_then_dispatch can be a template parameter for future_base
<hkaiser>
future_then_dispatch can live outside of future_base
<simbergm>
I feel like I'm missing something that's obvious to you...
<hkaiser>
its a member type currently, but does not have to be one
<simbergm>
right, but how do you make the specialization available to future_base without a template parameter?
<hkaiser>
simbergm: let me try to sketch something
<simbergm>
a future_then_dispatch declaration is not enough, it needs the definition as well
<simbergm>
ok, thanks
<hkaiser>
well, it could have a default implementation for local tasks only (no launch policy, no executor, just for the simple .then(future&&, F&&)
<simbergm>
yees... is it easier if we discuss this on friday? I feel like we're close but not quite synchronized in our thoughts, or there's some technique that I'm not aware of that we can use
<primef>
hkaiser: thanks. However, we are not allowed to use any additional libraries, except hpx. Thus, we will have to figure it out without using blaze.
<hkaiser>
primef: blaze is just a matrix abstraction, you can easily replace it with your own
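To illustrate that point, a minimal self-written row-major matrix type is all the transpose examples really need; a plain C++ sketch:

    // Minimal row-major matrix abstraction, enough to stand in for an
    // external matrix library in experiments like the transpose example.
    #include <cstddef>
    #include <vector>

    template <typename T>
    class matrix
    {
        std::size_t rows_, cols_;
        std::vector<T> data_;

    public:
        matrix(std::size_t rows, std::size_t cols)
          : rows_(rows), cols_(cols), data_(rows * cols)
        {}

        T& operator()(std::size_t r, std::size_t c)
        {
            return data_[r * cols_ + c];
        }
        T const& operator()(std::size_t r, std::size_t c) const
        {
            return data_[r * cols_ + c];
        }

        std::size_t rows() const { return rows_; }
        std::size_t cols() const { return cols_; }
    };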
<jbjnr>
is there an hpx call today? my calendar thinks so, but it is probably an obsolete recurring invite
<simbergm>
jbjnr: friday (because of pasc review)
<jbjnr>
ok
<simbergm>
hkaiser: aha, thanks
<hkaiser>
simbergm: .then might have to be variadic for this
<hkaiser>
we have similar things with async, you can look there if you want
<simbergm>
so does this work because you have the fallback definition there (just the declaration would not be enough in that module)? and if one defines other specializations on top of that, the correct specialization would then be used
<simbergm>
?
<hkaiser>
yes
<hkaiser>
if the specializations are missing we should generate a compilation error, like it's done in future.hpp right now
<simbergm>
weird (to me...)
<simbergm>
is there a name for this pattern/technique?
<simbergm>
so that fallback future_then_dispatch definition would just be a dummy implementation which fails if it's actually used?
<hkaiser>
yes
<hkaiser>
fails compiling
<hkaiser>
using a static_assert or simply don't define the main template
<simbergm>
static_assert(something falsey)
<simbergm>
wait, you mean just the declaration would be enough?
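In generic form, the pattern being described is roughly the sketch below (names are illustrative, not the actual HPX types): future keeps a single, fully generic .then that forwards to a dispatch class template; the primary template only static_asserts, and the real implementations are supplied as specializations in other modules:

    // Illustrative sketch of the dispatch-point pattern (not the actual HPX
    // types). The primary template static_asserts, so using a combination
    // for which no specialization exists is a compile-time error.
    #include <type_traits>
    #include <utility>

    template <typename T, typename Enable = void>
    struct future_then_dispatch
    {
        template <typename Future, typename F>
        static void call(Future&&, F&&)
        {
            static_assert(sizeof(F) == 0,
                "no future_then_dispatch specialization available");
        }
    };

    template <typename T>
    class future
    {
    public:
        // the one remaining, absolutely generic .then: it only forwards
        template <typename F>
        auto then(F&& f)
        {
            return future_then_dispatch<T>::call(
                std::move(*this), std::forward<F>(f));
        }
    };

    // elsewhere (e.g. in the execution module) a real implementation is
    // provided as a specialization:
    // template <typename T>
    // struct future_then_dispatch<T, std::enable_if_t</*...*/>> { /*...*/ };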
<hkaiser>
well, now I used the same password for matrix as for the irc, how do I rejoin properly
<simbergm>
or is that from matrix?
<simbergm>
apparently yes
<simbergm>
yeah, you need to register as on irc
<simbergm>
or identify yourself
<heller>
hkaiser: you have to tell the Freenode IRC bridge to register your name
<simbergm>
not sure if you can have both open at the same time
<heller>
you can
<hkaiser>
heller: with the same user name?
<heller>
!storepass <your nickserv pass> in the Freenode IRC Bridge status thingy
<heller>
not the same nick, but the same registration
<hkaiser>
ok, will try later, no time now
<simbergm>
tarzeau: very nice, would it be possible for you to automatically apply those fixes and make a pr?
<simbergm>
do you know what tool it's using? we might want to add that to our ci
<K-ballo>
what is matrix/riot?
ritvik has joined #ste||ar
<simbergm>
K-ballo: thanks for answering the poll ;)
<heller>
K-ballo: a free alternative to slack/discord
<jbjnr>
simbergm: I would not use matrix riot because I've never heard of it, but slack would be ok
primef has quit [Remote host closed the connection]
<K-ballo>
do I have to care? does "move" mean I'd be forced to leave IRC?
<simbergm>
jbjnr: fair enough :)
<simbergm>
K-ballo: it would mean that, yes, but this is purely hypothetical at the moment (it's just me and heller being excited about it)
<simbergm>
btw, just as a point for "no one uses it": mozilla just moved their messaging from irc to self-hosted matrix
<heller>
K-ballo: for the time being, there's nothing wrong with those who are interested and want to check it out connecting to the IRC channel via riot.im, for example
<heller>
instead of their regular IRC client
<heller>
FWIW, there seem to be slack bridges as well
<K-ballo>
if it's just people using it to connect here, I don't really need to care
<hkaiser>
K-ballo: if things are as seamless as heller claims this irc channel could stay indefinitely
<jbjnr>
how can I see this matrix riot thing?
<jbjnr>
I hardly ever check IRC any more - only when I need to ask something, but I forget to check if others are asking for help
<heller>
hkaiser: correct. I am not sure if there are more benefits to have a "proper" matrix room
<heller>
like video conferencing or the like
<hkaiser>
heller: I'm not worried about video conferencing
<hkaiser>
the only reason why I might consider supporting a migration is the ease of registration
<hkaiser>
it makes it simpler for people to get on
<simbergm>
about registration: having the irc channel be the main channel while having some people connect via riot is not helpful because you have to register for both irc and riot in that case (I still use it that way because of persistent history)
<simbergm>
jbjnr: it's open source and provides bridges to other services
<simbergm>
slack is backed by an evil corporation (not really, but it is tied to a company that will make us pay for history and may change their policies whenever)
<jbjnr>
hk disconnected
<jbjnr>
what is open source, this quassel thingy or the matrix riot
<heller>
both
<jbjnr>
If I can use a web browser instead of a separate client, then I'll try it
<heller>
you can
<jbjnr>
and see history even when I disconnect - that's why I gave up using the laptop and never check IRC - I'm using remote desktop to the windows machine for this
<jbjnr>
simbergm: I fixed dataflow annotation
<jbjnr>
I hope hkaiser sees this before he starts on it
<heller>
jbjnr: you can also have this
<jbjnr>
turned out to be easier than expected
<jbjnr>
next, continuations ...
<jbjnr>
heller: have what?
<heller>
jbjnr: history
<jbjnr>
with the quassel thingy? - that's what you use yes?
<jbjnr>
I'll need a server to run it on I guess - a machine that is always on?
<jbjnr>
and not running windows ....
<zao>
I pay for IRCCloud to interact with the few people still stuck on legacy platforms :P
<heller>
jbjnr: for quassel: yes. I stopped using quassel 5 hours ago and switched to riot.im (which is a client to matrix)
<zao>
I'm still annoyed with that it's harder than it should to have multiple identities for Matrix.
<jbjnr>
ok, so matrix is a server thingy and riot is a front end
<zao>
Not keen on mixing my work with hobby interests, particularly as one of those tends to include full names.
<jbjnr>
zao: don't be ashamed of being who you are!
<heller>
jbjnr: correct. and matrix.org runs on the cloud
<jbjnr>
anonymity is overrated and makes the internet rubbish
<zao>
jbjnr: Some people have opsec concerns.
<jbjnr>
meaning?
<zao>
There's a lot of weird people out there, particularly if you're not a dude.
<simbergm>
zao: firefox containers? :) not as elegant as having it built in but should do the job decently
<zao>
simbergm: Doesn't help across the device ecosystem with mobile devices and whatnot.
<simbergm>
yeah, fair point
<zao>
We looked at running Matrix inside the organization, and that would mean having one mandatory identity for internal stuff, I'm sure GDPR was involved somehow.
<zao>
I don't want to involve a work identity server with any private stuff I do, which is a bother.
<heller>
that's what you get for having a life!
<zao>
^^
<simbergm>
jbjnr: btw, it's not like we'll get rid of the hpx slack channel anyway
<simbergm>
we don't lose anything by having that around since it's on the cpplang slack
<jbjnr>
no, but I never check that because there's no traffic
<jbjnr>
if we shut down irc, everyone would go there instead
<jbjnr>
but I will try out matrix riot stuff
<jbjnr>
just to see
<simbergm>
yeah, give it a try, you might just like it
hkaiser has joined #ste||ar
<heller>
at least the IRC bridge is very nice. Since I don't have an IRC client at work, and the quassel web client just sucks, this is awesome
<heller>
"Our CDN was unable to reach our servers"
adi_ahj has quit [Quit: adi_ahj]
hkaiser has quit [Ping timeout: 260 seconds]
RostamLog has joined #ste||ar
adi_ahj has quit [Quit: adi_ahj]
<jbjnr>
hkaiser: fyi - I got annotation working with dataflow - at least mostly - I have a new compilation error on some complex use cases, but I am on the case, so don't spend time on it just yet
<hkaiser>
jbjnr: ok
nikunj has joined #ste||ar
adi_ahj has joined #ste||ar
<diehlpk_work>
simbergm, I will prepare the update for HPX 1.4 for Fedora today
<diehlpk_work>
If they fixed the bug, I will push it to the upcoming Fedora version and rawhide
adi_ahj has quit [Quit: adi_ahj]
primef has joined #ste||ar
<K-ballo>
"When it was first released" what's that? 2007?
adi_ahj has joined #ste||ar
<K-ballo>
simbergm: "I am employed" needs an option for the non-employed
<simbergm>
K-ballo :(
<simbergm>
I'll add that
<K-ballo>
I was thinking students, but yeah me too
<simbergm>
And yeah, roughly 2007 (AFAIK)
<simbergm>
diehlpk_work thanks for all the work in fedora!
<simbergm>
Any new developments that might turn into work?
adi_ahj has quit [Quit: adi_ahj]
adi_ahj has joined #ste||ar
primef has quit [Remote host closed the connection]
<K-ballo>
me? no, I'm focusing on my next trip
nikunj has quit [Ping timeout: 265 seconds]
<heller>
@ms:matrix.org: also "self-employed" or freelancing or somesuch
ritvik has joined #ste||ar
wash[m] has quit [Read error: Connection reset by peer]
wash[m] has joined #ste||ar
<diehlpk_work>
simbergm, Any idea what Rebecca could work on?
ritvik99 has joined #ste||ar
<diehlpk_work>
hkaiser, jbjnr, heller Do you want to have anything done on our documentation?
ritvik has quit [Ping timeout: 260 seconds]
<hkaiser>
simbergm: I'd add 'Not yet (but plan to)' as a possible answer to Q1
adi_ahj has quit [Quit: adi_ahj]
<heller>
diehlpk_work: lots ;)
<diehlpk_work>
heller, Be more specific?
<heller>
Maybe go through the tutorials that we've given and check how the docs can be improved?
<heller>
hpxrun.py should be documented eventually
<heller>
The components as well
<hkaiser>
heller: I think diehlpk_work asks because we have hired the GSoD student for a couple of hours a week and we want to decide what she should work on
<heller>
Also go through our modules and add more descriptions to their index
<heller>
The serialization module desperately waits for documentation
<heller>
Future, promise, all the standard stuff
<heller>
While it's nice to be able to point to external sources, there's always a bad touch to it
<diehlpk_work>
heller, A reminder that she is not a cs student and cannot read code
<heller>
That's why I mentioned the std stuff and serialization
<heller>
There's existing material available which she can learn from
<diehlpk_work>
I need this link for the Fedora package
<diehlpk_work>
heller, Would you mind sending me and parsa an email with more details?
<diehlpk_work>
And providing her with some examples of what she should document?
<heller>
diehlpk_work: also, a technical writer needs to be somewhat familiar with the product. Unfortunately, we're offering a rather low-level product, so that's something she has to deal with.
<diehlpk_work>
Sure, but it will take time
<heller>
In my ideal world, she could even be our first line of support. That is, being knowledgeable enough to be able to point to the correct piece of documentation
<heller>
So better start sooner than later ;)
<heller>
Can you give me her email?
<heller>
diehlpk_work: ^^
<diehlpk_work>
heller, see pm
<hkaiser>
diehlpk_work: I uploaded the files, the link does work now
<diehlpk_work>
hkaiser, Thanks, so I can finish the Fedora update
weilewei has joined #ste||ar
hkaiser has quit [Quit: bye]
<weilewei>
simbergm based on your spack understanding, would it be possible for spack to build and run tests before installation? In other words, is installation allowed to happen only after all tests pass?
hkaiser has joined #ste||ar
ritvik99 has quit [Ping timeout: 265 seconds]
<jbjnr>
hkaiser: is sequenced_executor useful for anything? (is it deprecated or anything?)
<hkaiser>
shouldn't be deprecated, should work as advertized
<jbjnr>
what's the correct way of saying call function(a,b,c,d) if that signature is valid, but call function(a,b) if it isn't? (in the context of a function invocation forwarded via post:: or similar)
<hkaiser>
jbjnr: is_callable<F(A,B,C,D)>
<hkaiser>
enable_if on it
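A sketch of the suggestion, using std::is_invocable with enable_if (hpx::traits::is_callable<F(A,B,C,D)> plays the same role on the HPX side); invoke_preferred is an illustrative name, not an existing function:

    // Call f(a, b, c, d) if that signature is valid, otherwise fall back to
    // f(a, b). Shown with std::is_invocable/enable_if as a stand-in for
    // hpx::traits::is_callable mentioned above.
    #include <type_traits>
    #include <utility>

    template <typename F, typename A, typename B, typename C, typename D,
        typename std::enable_if<
            std::is_invocable<F, A, B, C, D>::value, int>::type = 0>
    decltype(auto) invoke_preferred(F&& f, A&& a, B&& b, C&& c, D&& d)
    {
        return std::forward<F>(f)(std::forward<A>(a), std::forward<B>(b),
            std::forward<C>(c), std::forward<D>(d));
    }

    template <typename F, typename A, typename B, typename C, typename D,
        typename std::enable_if<
            !std::is_invocable<F, A, B, C, D>::value, int>::type = 0>
    decltype(auto) invoke_preferred(F&& f, A&& a, B&& b, C&&, D&&)
    {
        return std::forward<F>(f)(std::forward<A>(a), std::forward<B>(b));
    }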
weilewei has quit [Remote host closed the connection]
<simbergm>
diehlpk_work: the release archive is intentionally not on stellar.cct.lsu.edu because it's annoying to upload it there when it's already on github, so please use the github archives if possible
<simbergm>
thanks everyone for the comments on the survey
<simbergm>
weilewei (in case you see this in the logs) it may be possible to coerce cmake into running the test target before the install target but I wouldn't recommend it, it's going to take a very long time and unfortunately we still have some tests that occasionally fail which won't be very helpful for you
<hkaiser>
simbergm: uploading the files is not an issue, but I agree that using the github files is a good option as well
<simbergm>
hkaiser: yeah, I suppose uploading them is not so bad, but every link that I have to change manually is extra work
<simbergm>
did you grab the tar.gz from github now and upload it on stellar.cct.lsu.edu?
<hkaiser>
right, understood
<hkaiser>
yes
<hkaiser>
the .zip as well
<simbergm>
yep, good
<hkaiser>
let's phase the stellar.cct.lsu.edu file storage out over time
<simbergm>
not that I mind doing the release :P I just don't like duplicating something that's already done automatically for me
<hkaiser>
right
<heller>
hkaiser: did you get my email?
<hkaiser>
heller: which one?
<heller>
The one to Rebecca
<hkaiser>
heller: yes, saw that, thanks
<heller>
Good, please amend it if I got anything wrong
<heller>
Or not appropriate
<hkaiser>
heller: sounds good, thanks again
jbjnr has quit [Read error: Connection reset by peer]