<hkaiser>
gnikunj[m]: normal rules of function overloading apply
<gnikunj[m]>
Can I use the CPO you wrote for my distributed APIs in a different namespace?
<hkaiser>
sure, as long as you can make the compiler find it
<hkaiser>
your tag_invoke() that is
<gnikunj[m]>
aah got it! I think I know what to do
<hkaiser>
gnikunj[m]: the compiler can find the tag_invoke either if it's in the same namespace as the CPO or through ADL
<hkaiser>
well, the first case is technically ADL as well
<gnikunj[m]>
yes, understood. I need to do a tag_invoke(<actual_namespace>::<object>)
<gnikunj[m]>
this should let the compiler know where to find the associated object
<gnikunj[m]>
hkaiser: I'm having trouble defining tag_invoke in hpx::resiliency::distributed::experimental using cpo defined in hpx::resiliency::experimental
<gnikunj[m]>
Since they're not in the same namespace, the compiler will try to find it with ADL
<gnikunj[m]>
but we don't have any arguments in the namespace hpx::resiliency::distributed::experimental
<hkaiser>
gnikunj[m]: right
<gnikunj[m]>
am I missing something or do we need 2 separate CPOs in this case?
<hkaiser>
two CPOs means two APIs
<gnikunj[m]>
yes, but do we really want that?
<hkaiser>
I'd prefer not to
<gnikunj[m]>
yes, so do I
<gnikunj[m]>
the only way I can think of is to get rid of the intermediary distributed namespace
<hkaiser>
or have a using declaration
<gnikunj[m]>
The function types for the local and distributed codes are completely different, so we should not run into trouble either
<gnikunj[m]>
yes but then we risk exposing the whole namespace
<gnikunj[m]>
I mean I can certainly do a using namespace hpx::resiliency::distributed::experimental inside the CPO's namespace. But then we're essentially keeping the APIs in the same namespace
<gnikunj[m]>
I don't think it will be any different from getting rid of the intermediary namespace
<hkaiser>
gnikunj[m]: true
<gnikunj[m]>
hkaiser: what would you suggest?
<gnikunj[m]>
I've been scratching my head over this for the past 30min
<hkaiser>
you only need the tag_invoke in the same namespace
<gnikunj[m]>
yes, that's what I essentially need
<gnikunj[m]>
but having them in the same namespace will not exactly split our local and distributed implementations
<gnikunj[m]>
I can certainly send them off to different files where they can live happily ever after. But the same namespace can be daunting (even though they share no similarities whatsoever)
<hkaiser>
gnikunj[m]: the implementation can stay in a separate ns
<hkaiser>
just the tag_invoke has to be where the cpo lives
<hkaiser>
the tag_invoke just dispatches to wherever the implementation is
<gnikunj[m]>
why isn't my code compiling then :/
<gnikunj[m]>
I use this for distributed apis: tag_invoke(hpx::resiliency::experimental::async_replay_t, const std::vector<hpx::naming::id_type>& ids,
<gnikunj[m]>
but still the point being, ADL checks will limit it to namespace of tag_invoke, and arguments provided
<gnikunj[m]>
unless we have an argument that belongs to the namespace of our implementation, we can't get tag_invoke to work correctly
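A minimal, self-contained sketch of the dispatch pattern being discussed (all names here are made up for illustration, not the actual HPX API): the CPO and its `tag_invoke` overload live in one namespace, while the real implementation lives in another; the `tag_invoke` only forwards, and ADL on the tag type finds it.

```cpp
#include <utility>

namespace lib { namespace experimental {

    // the CPO lives here
    inline constexpr struct async_replay_t
    {
        template <typename... Ts>
        constexpr auto operator()(Ts&&... ts) const
            -> decltype(tag_invoke(*this, std::forward<Ts>(ts)...))
        {
            // unqualified call: resolved via ADL on the tag type and arguments
            return tag_invoke(*this, std::forward<Ts>(ts)...);
        }
    } async_replay{};

}}    // namespace lib::experimental

namespace lib { namespace distributed {

    // the actual implementation can live in a different namespace
    inline int async_replay_impl(int x) { return x + 1; }

}}    // namespace lib::distributed

namespace lib { namespace experimental {

    // ...but the tag_invoke overload must be findable by ADL, e.g. by
    // placing it in the namespace of the tag; it only dispatches
    inline int tag_invoke(async_replay_t, int x)
    {
        return lib::distributed::async_replay_impl(x);
    }

}}    // namespace lib::experimental
```

Since the tag type `async_replay_t` is an associated type of every call to the CPO, ADL always considers its namespace, which is why the `tag_invoke` overload does not need an argument from the implementation's namespace.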
bita_ has quit [Read error: Connection reset by peer]
bita_ has joined #ste||ar
Yorlik has joined #ste||ar
hkaiser has quit [Quit: bye]
kale[m] has quit [Ping timeout: 240 seconds]
kale[m] has joined #ste||ar
jaafar has quit [Quit: Konversation terminated!]
Yorlik has quit [Ping timeout: 240 seconds]
<jbjnr>
Anyone awake? Question. I want to do an if template argument has function X, then call it, else do this. Do we have some boilerplate code anywhere that implements this (like is invokable or something)
<jbjnr>
I don't want to create a helper struct if we already have one
<gnikunj[m]>
There is hpx::traits::is_invocable
<gnikunj[m]>
freenode_biddisco[m] There's also hpx::util::detail::invoke_deferred that gives you the return type
bita_ has quit [Ping timeout: 260 seconds]
<jbjnr>
Those are ok, but I think what I want is in <hpx/concepts/has_member_xxx.hpp>
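For reference, the "call member X if it exists, else fall back" dispatch can also be written with the standard detection idiom; this is a generic sketch in the same spirit as the traits in that header (the member name `reset` and both structs are illustrative, not from HPX).

```cpp
#include <type_traits>
#include <utility>

// generic detection via std::void_t; `reset` is just an illustrative member name
template <typename T, typename = void>
struct has_reset : std::false_type {};

template <typename T>
struct has_reset<T, std::void_t<decltype(std::declval<T&>().reset())>>
    : std::true_type {};

// call the member if the template argument has it, else do something else
template <typename T>
int reset_or_default(T& t)
{
    if constexpr (has_reset<T>::value)
    {
        t.reset();      // member exists: call it
        return 1;
    }
    else
    {
        return 0;       // member absent: fallback path
    }
}

struct with_reset    { int n = 5; void reset() { n = 0; } };
struct without_reset { int n = 5; };
```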
<rori>
Please let me know if something is wrong, I'll send an email on hpx-users/hpx-devel probably later today :)
hkaiser has joined #ste||ar
Yorlik has joined #ste||ar
<hkaiser>
rori: thanks a lot for working on the release candidate!
<hkaiser>
is there anything I can do to help?
<rori>
you're welcome, just reworking the release notes and will open a draft later today :)
<tiagofg[m]>
Hi everyone,
<tiagofg[m]>
In HPX, is it possible to wrap two member functions with the same name (with different parameters) in two actions?
<tiagofg[m]>
Or each member function must have a unique name to create an action from it?
<weilewei>
hkaiser: to prepare the libcds tag for the HPX release, shall I merge the hpx-thread branch into the master branch in the STE||AR-group/libcds repo, and then make a tag?
<hkaiser>
weilewei: yes, that would be the first step
<hkaiser>
also you'd need to create a PR for HPX changing the default libcds tag to the one you created
<hkaiser>
rori: will include this into the next RC
<weilewei>
hkaiser ok, I will get it ready today I think
<rori>
yes btw what do we do about hpxMP, do we deprecate it for this release or not yet?
<hkaiser>
tiagofg[m]: that is possible, but requires more handiwork; also, the actions will have different names
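The "handiwork" is illustrated below in plain C++ (no HPX action machinery; the `server` class and `compute` overloads are made up): an overloaded member-function name is ambiguous on its own, so the exact overload must be selected with a `static_cast` before it can be handed to anything, which is also what an action definition would need.

```cpp
// the same member-function name with two signatures
struct server
{
    int compute(int x)        { return x * 2; }
    int compute(int x, int y) { return x + y; }
};

// pick each overload explicitly; &server::compute alone would be ambiguous
constexpr auto compute_one = static_cast<int (server::*)(int)>(&server::compute);
constexpr auto compute_two = static_cast<int (server::*)(int, int)>(&server::compute);
```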
<hkaiser>
rori: good question
<hkaiser>
I think we should deprecate it in the build system, I don't have anybody working on it right now
<rori>
ok will add a warning then :)
<hkaiser>
thanks
<tiagofg[m]>
hkaiser: Humm ok, I will give different names then, to make it simple, thank you!
jaafar has joined #ste||ar
diehlpk__ has joined #ste||ar
nanmiao11 has joined #ste||ar
<gnikunj[m]>
hkaiser: yt?
kale[m] has quit [Ping timeout: 260 seconds]
kale[m] has joined #ste||ar
karame_ has joined #ste||ar
<hkaiser>
gnikunj[m]: here now
<gnikunj[m]>
hkaiser: you said you implemented async_replay executors. From the code, it looks like you've extended the APIs to allow executors as an argument.
<hkaiser>
gnikunj[m]: ok?
<gnikunj[m]>
isn't the async_replay executor supposed to be used as an alternative to the current executors?
<hkaiser>
it does
<gnikunj[m]>
how do I use one in a parallel_for then?
<hkaiser>
not quite what you need, but should get you started
<gnikunj[m]>
thanks!
<hkaiser>
you don't need to implement all of the API functions, async_execute is sufficient, but you might want to implement bulk_async_execute as well to be efficient
<gnikunj[m]>
I will look into it
<gnikunj[m]>
having it will be nice
<hkaiser>
the APIs that are not implemented are emulated using the above two
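A stand-in sketch of the minimal shape being described, built on `std::async` rather than the HPX executor API (the class name is made up): only `async_execute` is provided, and the idea is that a framework can emulate the remaining executor functions (sync, bulk, ...) in terms of this one.

```cpp
#include <future>
#include <utility>

// minimal executor-style type: one launch function, everything else emulated
struct minimal_executor
{
    template <typename F, typename... Ts>
    auto async_execute(F&& f, Ts&&... ts)
    {
        return std::async(std::launch::async,
            std::forward<F>(f), std::forward<Ts>(ts)...);
    }
};
```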
<hkaiser>
rori: yt?
<rori>
yes
<hkaiser>
I would like to introduce options that allow enabling deprecation warnings depending on a version
<hkaiser>
something like HPX_WITH_DEPRECATION_WARNINGS_V1_6, which would enable a warning starting with V1.6
<hkaiser>
this would help manage the warnings in one place
<hkaiser>
and helps documenting code
<hkaiser>
not sure how far you have already gone with that
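One possible shape for such version-gated warnings, sketched with a plain preprocessor macro (the macro name `DEPRECATED_V1_6` is made up; the build-system option mirrors the one proposed above and would be defined, or not, by CMake):

```cpp
// when the build system defines the option, the attribute is emitted;
// otherwise the annotation compiles away to nothing
#if defined(HPX_WITH_DEPRECATION_WARNINGS_V1_6)
#  define DEPRECATED_V1_6(msg) [[deprecated(msg)]]
#else
#  define DEPRECATED_V1_6(msg)
#endif

DEPRECATED_V1_6("use new_api() instead")
inline int old_api() { return 7; }
```

This keeps the per-version switches in one place while the annotation itself documents, next to the code, which release deprecated it.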
weilewei has quit [Remote host closed the connection]
<rori>
yep for now it's only stated in the comments `# deprecated in HPX <ver>` but I can do that yes
<rori>
I'm currently doing it
<hkaiser>
ok, let me create a PR for this now
<rori>
ok sure
<ms[m]>
hkaiser: question about for_each return types if you have a moment
<ms[m]>
*when you have a moment ;)
<hkaiser>
rori: see #4862
<hkaiser>
ms[m]: sure
<rori>
👍️
<hkaiser>
ms[m]: any time
<ms[m]>
hkaiser: thanks; so the return types are nicely inconsistent for the various std::(ranges::)for_each(_n) overloads
<hkaiser>
yes
<hkaiser>
which is a major pita, indeed
bita_ has joined #ste||ar
<ms[m]>
would you prefer following the spec to the letter and do the different return types? or just return e.g. something with F, Iter (i.e. in_fun_result-ish)? for all variations? or something in between?
<hkaiser>
ms[m]: for the other algorithms I touched recently I followed the spec
<hkaiser>
I have left the current implementations/APIs untouched, though
<K-ballo>
there's no point in returning F for parallel, we copy it
<hkaiser>
(mostly)
<hkaiser>
K-ballo: yes, that's what I wanted to say as well
<hkaiser>
some things we can't support for the parallel version, though
<K-ballo>
the discrepancy between std::for_each for parallel and regular algorithms is by design
<ms[m]>
hmm, ok, what would in that case be returned from a parallel hpx::ranges::for_each (which is afaict not in the spec)?
<hkaiser>
just the iterator
<K-ballo>
in_result ?
<hkaiser>
yes
<ms[m]>
ok, right, just drop the F
diehlpk__ has quit [Ping timeout: 260 seconds]
<hkaiser>
we can't use ranges::for_each_result, however, as that has F in it
<K-ballo>
there's a specific for_each_result type? ...
<hkaiser>
yes
<ms[m]>
looks to me like this would be clearest to achieve through having a few additional versions of struct for_each, no? or would you adapt the existing struct for_each?
<ms[m]>
using for_each_result = in_fun_result<I, F>;
<hkaiser>
which is just a using
weilewei has joined #ste||ar
<K-ballo>
what were they thinking..?
<hkaiser>
ms[m]: not sure what you mean
<weilewei>
hkaiser what should our libcds tag sound like? v1.0.0-hpx?
<ms[m]>
then, what is returning F good for in the non-task version? returning last for the non-ranges and ranges execution policy versions would I suppose work relatively well
<ms[m]>
if the standard says void and we return an iterator one can always ignore the return value
<hkaiser>
ms[m]: true
<hkaiser>
returning F allows using it as an accumulator object
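A small standard-C++ illustration of that accumulator use (the `summer` functor is made up): sequential `std::for_each` returns a moved copy of the functor, so state accumulated inside it survives the call, while the parallel overloads return nothing of the sort precisely because the functor may be copied across worker threads.

```cpp
#include <algorithm>
#include <vector>

// stateful functor: for_each's return value carries the accumulated total
struct summer
{
    int total = 0;
    void operator()(int x) { total += x; }
};

inline int sum_with_for_each(std::vector<int> const& v)
{
    return std::for_each(v.begin(), v.end(), summer{}).total;
}
```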
<K-ballo>
non-range algorithms don't return the iterator, because it's the same already given as an argument
<K-ballo>
range algorithms return the iterator, because they only got a sentinel to look for, so the iterator at that sentinel is new information that has been computed
<ms[m]>
ah... for the fancy ranges chaining stuff?
<diehlpk_work>
HPX 1.4.1 and 1.5.1 stopped working with CMake on the upcoming Fedora release
<rori>
hkaiser: approved now, feel free to merge
diehlpk__ has quit [Ping timeout: 260 seconds]
<hkaiser>
rori: thanks!
<hkaiser>
diehlpk_work: no idea why make didn't find a makefile
<hkaiser>
that looks like a setup problem
<ms[m]>
diehlpk_work: could it be using ninja for some reason by default? does the fedora build system give you access to all the files that were generated?
<K-ballo>
wrong path, or does the script ever cd into x86_64-redhat-linux-gnu ?
kale[m] has joined #ste||ar
karame_ has quit [Remote host closed the connection]
diehlpk__ has joined #ste||ar
karame_ has joined #ste||ar
<diehlpk_work>
K-ballo, Good catch
<diehlpk_work>
Fedora exposes -B x86_64-redhat-linux-gnu as some new argument
<diehlpk_work>
I hate it when one adds additional cmake flags via some macro
karame_ has quit [Remote host closed the connection]
<ms[m]>
hkaiser: sorry, didn't mean to drag out the deprecation pr... I just commented with a potential suggestion, but feel free to ignore as you see fit
<hkaiser>
ms[m]: I like your idea, I'll see what I can do
<ms[m]>
👍️
<diehlpk_work>
ms[m], 1.5.0 compiles on Fedora for x86
<ms[m]>
diehlpk_work: nice, thanks!
<diehlpk_work>
Just need to fix the new cmake stuff
<diehlpk_work>
1.4.1 has the same issues
<diehlpk_work>
Something is fishy in the Fedora build system, the arch variable on i686 is i386 and therefore the build does not work
<ms[m]>
hkaiser: btw, should in_out_result (and in_fun_result, etc etc) maybe be in hpx::ranges? and the helpers get_x_result in hpx::ranges::detail?
weilewei has quit [Remote host closed the connection]
nanmiao11 has quit [Remote host closed the connection]
<ms[m]>
hkaiser: you should btw also have received an email about an account on daint now thanks to jbjnr
<ms[m]>
if someone else feels like they could use an account on daint for debugging builds, let one of the cscs people know (K-ballo, freenode_gonidelis[m]?)
* K-ballo
is not a cscs people
<ms[m]>
those parentheses may have been a bit confusing, K-ballo is someone who may have a need for a cscs account (although he may already have one)
nanmiao11 has joined #ste||ar
weilewei has joined #ste||ar
<hkaiser>
ms[m]: yes, I got the account now - thanks!
<ms[m]>
👍️ (the corresponding types are in std::ranges after all)
<ms[m]>
thank jbjnr!
<hkaiser>
ms[m]: yes, true
<hkaiser>
my oversight
<gonidelis[m]>
hkaiser: yt?
<hkaiser>
here
<gonidelis[m]>
"Adapt HPX range-based parallel algorithms in order to expose a range based interface according to latest Ranges-TS" this is the title that we retain thus far in the project. Things have changed a lot along the way though (I mean a lot! :p)
<gonidelis[m]>
Do you have any suggestions on how could I reform it?
<hkaiser>
gonidelis[m]: I created another ticket for this
<hkaiser>
#4822
<gonidelis[m]>
oh ok. So that's more of a general concept that encapsulates our prior goal, right?
<gonidelis[m]>
(I just changed the title btw ;) )
diehlpk__ has quit [Ping timeout: 244 seconds]
diehlpk has joined #ste||ar
diehlpk has quit [Changing host]
diehlpk has joined #ste||ar
diehlpk_work has quit [Ping timeout: 260 seconds]
<gnikunj[m]>
hkaiser: is there a way to stop a running task in HPX? I don't see a future having an option to sleep/destroy the task.
<hkaiser>
gnikunj[m]: not from the outside
<gnikunj[m]>
what do you mean "not" from the outside?
<hkaiser>
the task has to break itself
<gnikunj[m]>
can we not tell the scheduler to dequeue the task?
<gnikunj[m]>
or something similar
<hkaiser>
no
<hkaiser>
you have to tell the task to stop running
<gnikunj[m]>
how do I tell the task to stop running?
<hkaiser>
HPX is cooperative, not preemptive
<hkaiser>
how? that's your problem ;-)
<hkaiser>
set a flag and check from inside the task itself
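The flag-based scheme just described, as a runnable sketch (the function and its bound are made up for the demo): the task polls an atomic flag set from the outside and stops itself cooperatively.

```cpp
#include <atomic>

// the task checks the flag between units of work; the iteration bound just
// keeps the demo finite when nobody requests a stop
inline long run_until_stopped(std::atomic<bool> const& stop_requested)
{
    long iterations = 0;
    while (!stop_requested.load(std::memory_order_relaxed) &&
        iterations < 100000)
    {
        ++iterations;   // one cooperative unit of work
    }
    return iterations;
}
```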
<gnikunj[m]>
I thought HPX was preemptive
<hkaiser>
no it isn't
<gnikunj[m]>
I see. Essentially I wanted to trigger the tasks to destroy themselves for async replicate if one of the tasks returned with a correct result
<gnikunj[m]>
this would save execution time
<hkaiser>
ok
<gnikunj[m]>
looks like it can't be done :/
<hkaiser>
gnikunj[m]: you can do it, but it requires additional code
<gnikunj[m]>
hkaiser: I'm not afraid of adding more code ;-)
<gnikunj[m]>
unless it's highly complex metaprogramming stuff. Then count me out :P
<hkaiser>
you can use the C++20 stop_token; alternatively, hpx threads can be told to stop running, but both methods require the thread to actively do something to check whether it should stop
<hkaiser>
hpx threads also automatically check whether they should stop running (second method) at certain synchronization points, like suspension
bita__ has joined #ste||ar
bita_ has quit [Ping timeout: 256 seconds]
<gnikunj[m]>
hkaiser: how do I trigger threads to suspend if I can't get a handle to the thread from an async?
<hkaiser>
that's your problem ;-)
<gnikunj[m]>
do you mean to replace the async invocations with hpx::thread invocations?
<gnikunj[m]>
<hkaiser "that's your problem ;-)"> :D
<hkaiser>
but since you control the tasks that are being launched using async you can do it
<gnikunj[m]>
how so?
<gnikunj[m]>
any overload of async I'm missing out on?
<hkaiser>
communicate back to the launch site the thread-id as the first thing inside the launched task before running the user code
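The "report your handle first" pattern suggested above, sketched with standard threads rather than HPX ones (the function name is made up): before running any user code, the task publishes an identifier for itself through a promise, so the launch site obtains a handle it can later use to signal the task.

```cpp
#include <future>
#include <thread>

inline std::thread::id report_id_then_work()
{
    std::promise<std::thread::id> id_promise;
    std::future<std::thread::id> id_future = id_promise.get_future();

    std::future<void> task = std::async(std::launch::async, [&id_promise] {
        id_promise.set_value(std::this_thread::get_id());   // first thing
        // ... user code would run here ...
    });

    std::thread::id worker_id = id_future.get();   // handle available early
    task.wait();
    return worker_id;
}
```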
<gnikunj[m]>
since I invoke the function and return after its execution, it won't be helpful as suspending the thread post return isn't what I'm looking for.
<hkaiser>
gnikunj[m]: does this mean that the cancelable action example is broken as well?
<gnikunj[m]>
I haven't tried running it
<gnikunj[m]>
wait
<gnikunj[m]>
hkaiser: they run just fine
<gnikunj[m]>
most likely because you suspend the thread and then yield it
<gnikunj[m]>
so destructor won't have problems destroying the thread
<hkaiser>
gnikunj[m]: your code is broken
<hkaiser>
if you add a t.join() all will be fine
<gnikunj[m]>
aah right! my fault
<hkaiser>
also, the exception will be thrown inside the universal answer
<gnikunj[m]>
it works nicely now
<hkaiser>
not inside the main() function
<gnikunj[m]>
I see
<hkaiser>
t.interrupt instructs the thread 't' to interrupt itself at the next convenient synchronization point
<gnikunj[m]>
hkaiser: it works! I can use similar tactic to interrupt the running threads for async_replicate to stop them from executing if we already get a valid answer.
<hkaiser>
gnikunj[m]: nice
<gnikunj[m]>
essentially invoke a distributed async. Set up a communication channel. If the thread runs to completion, send all other threads a signal to interrupt their execution.
<hkaiser>
yes
<hkaiser>
what channel do you use?
<hkaiser>
stop_token?
<gnikunj[m]>
I haven't implemented yet. But I think I'll use stop_token, yes.
<gnikunj[m]>
or perhaps a standard communication channel within hpx. Attach a future to channel.get and set false on the channel once I have a valid return. In this case the attached continuation will interrupt the executing thread. If however I'm able to get the result and the channel is not set. I return the result and ask the invoker to set the channel for every other invocation.
<gnikunj[m]>
hkaiser: I think it's better understood in an implementation than explained in text
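In that spirit, here is one possible shape of the replicate-and-cancel protocol described above, sketched with `std::async` and an atomic flag instead of HPX channels and thread interruption (all names made up): replicas share a flag, the first valid result flips it, and the remaining replicas notice and bail out early.

```cpp
#include <algorithm>
#include <atomic>
#include <future>
#include <vector>

inline int first_replica_wins(int replicas)
{
    std::atomic<bool> done{false};
    std::vector<std::future<int>> futures;

    for (int i = 0; i != replicas; ++i)
    {
        futures.push_back(std::async(std::launch::async, [&done] {
            for (int step = 0; step != 100000; ++step)
            {
                if (done.load(std::memory_order_relaxed))
                    return -1;              // someone else already finished
            }
            done.store(true, std::memory_order_relaxed);
            return 42;                      // the valid result
        }));
    }

    // collect: at least one replica completed and produced the valid result
    int result = -1;
    for (auto& f : futures)
        result = std::max(result, f.get());
    return result;
}
```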