K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
bita has joined #ste||ar
<jaafar>
hkaiser: thanks for the tip. Doing a "make clean" replaced my problem with one that was much simpler to diagnose ;)
K-ballo has quit [Quit: K-ballo]
bita has quit [Ping timeout: 264 seconds]
bita has joined #ste||ar
<hkaiser>
jaafar: ok
<srinivasyadav227>
hkaiser: ok
nanmiao has quit [Quit: Connection closed]
hkaiser has quit [Quit: bye]
diehlpk_work has quit [Remote host closed the connection]
bita has quit [Ping timeout: 264 seconds]
<srinivasyadav227>
did cmake_minimum_required get changed to 3.17? I had 3.16 yesterday and was able to build, but now it throws an error
<srinivasyadav227>
anyway, I installed cmake 3.20, so there's no problem with cmake now
<srinivasyadav227>
I started to build but I think headers for **hpx::traits::is_threads_executor<Executor_>** are missing
<srinivasyadav227>
the error I was getting during build was ‘is_threads_executor’ is not a member of ‘hpx::traits’;
<srinivasyadav227>
I think HPX_HAVE_THREAD_EXECUTORS_COMPATIBILITY is not getting defined, and so is_threads_executor is not getting defined either
<srinivasyadav227>
just realised we should pass -DHPX_WITH_THREAD_EXECUTORS_COMPATIBILITY=ON to cmake, but why?
<ms[m]>
srinivasyadav227: the datapar stuff that you're building was not tested with the different compatibility options (it wasn't tested at all)
<ms[m]>
since we're going to remove HPX_WITH_THREAD_EXECUTORS_COMPATIBILITY before the next release, you can just remove any references to things like is_threads_executor in the code that you're dealing with
<ms[m]>
and yes, we bumped the minimum cmake requirement to 3.17
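A minimal sketch of what that removal could look like, assuming the trait is only declared when the compatibility macro is defined; the header path and the alias name are illustrative, not the actual HPX code:
```cpp
#include <type_traits>

#if defined(HPX_HAVE_THREAD_EXECUTORS_COMPATIBILITY)
#include <hpx/include/parallel_executors.hpp>   // assumed header for the trait

// funnel every use of the deprecated trait through one alias
template <typename Executor>
using uses_threads_executor = hpx::traits::is_threads_executor<Executor>;
#else
// once the compatibility layer is removed, the trait is simply "false"
template <typename Executor>
using uses_threads_executor = std::false_type;
#endif
```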
vroni[m] has quit [Quit: Idle for 30+ days]
sauravjoshi23 has joined #ste||ar
<srinivasyadav227>
ms: ohh ok, the build still failed at a later stage, I will try to figure it out
<sauravjoshi23>
Hello, I have a question regarding the installation of the HPX library on Windows. The manual says the Visual Studio 10, 12, and 13 compilers are supported, but I have Visual Studio 16. Is the Visual Studio 16 compiler supported, and if not, which one is best?
<ms[m]>
sauravjoshi23: that part is likely very out of date
<ms[m]>
I'm pretty sure whatever is the latest version of vs is supported
<rori>
yes, 16 is used in the GitHub actions
<ms[m]>
if that's 16 you're most likely ok
<ms[m]>
(not everyone is on windows, windows questions are best directed at hkaiser and k-ballo)
<gnikunj[m]>
unfortunately for/for_each returns either future<void> or future<Iterator>. There's no way for me to return a type T, unless I pack the results into an Iterator pack or something, and I'm not sure that would be optimal. This has the least overhead (imo). I do know of a few nitpicks, like generating Views of size n and then storing them separately; that should avoid allocation/deallocation on each operator= call.
<jedi18[m]>
If yes, could you please explain why we are using in_out_result? Also, should I replace the .first and .second in the tests with .in and .out?
<jedi18[m]>
Won't people using minmax_element expect to access the elements with .first and .second, like a pair?
<K-ballo>
jedi18[m]: would you link to the code returning pair?
<K-ballo>
that one would use minmax_result, not in_out_result
<K-ballo>
sorry, minmax_element_result
<K-ballo>
(same underlying thing)
<jedi18[m]>
Does minmax_element_result already exist or will I have to define it? Or do you mean I have to do `using minmax_element_result = something`?
<K-ballo>
in_out_result is for algorithms that return an input iterator and an output iterator
<K-ballo>
I don't know whether it already exists in HPX
<K-ballo>
IIRC all those algorithm results are in the same file? what was that
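For reference, a minimal sketch of the two result shapes being discussed, modelled on the std::ranges result types; the actual HPX definitions, headers, and member names may differ:
```cpp
// illustrative only; HPX's real definitions may differ
template <typename I, typename O>
struct in_out_result
{
    I in;     // iterator into the input sequence
    O out;    // iterator into the output sequence
};

template <typename Iter>
struct minmax_element_result
{
    Iter min;    // iterator to the smallest element
    Iter max;    // iterator to the largest element
};

// tests would then read result.min / result.max instead of a
// pair's result.first / result.second
```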
jejune has quit [Quit: "What are you trying to say? That I can dodge bullets?" "No Neo, what I'm trying to say, is that when you are ready.....you won't have to"]
nanmiao has joined #ste||ar
nanmiao has quit [Client Quit]
nanmiao has joined #ste||ar
<hkaiser>
rori: hey
<hkaiser>
rori: I'd like to understand where you have used #if !defined(HPX_COMPUTE_DEVICE_CODE)/#endif
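(For context, the usual shape of that guard is sketched below; the function is purely illustrative and not taken from the code in question.)
```cpp
// the guarded body is compiled out on the device pass (e.g. under
// nvcc/hipcc), since HPX_COMPUTE_DEVICE_CODE is defined there
void illustrative_host_only_function()
{
#if !defined(HPX_COMPUTE_DEVICE_CODE)
    // host-only work, e.g. throwing exceptions or touching the runtime
#endif
}
```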
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
<gonidelis[m]>
jedi18: K-ballo sorry i was out and about for a while
<gonidelis[m]>
K-ballo: is right
<gonidelis[m]>
min_max_result is a strange case. Do we have `hpx::parallel::util::min_max_result` implemented yet? jejune
<gonidelis[m]>
jedi18: ^^
<gonidelis[m]>
(sorry for the wrong tag)
<gonidelis[m]>
it's the first time I've encountered a pair/tuple not getting converted to an in_out_result/in_in_out_result; I will have to fix the guide
<gnikunj[m]>
hkaiser: yt?
<hkaiser>
sec
<gnikunj[m]>
ok
<hkaiser>
gnikunj[m]: now
<hkaiser>
whats up?
<gnikunj[m]>
I'm done implementing the replicate version as well ;)
<hkaiser>
didn't replicate take a majority voter or something?
<gnikunj[m]>
hkaiser: I'm now thinking of looking into the Kokkos execution space implementation. If things are well laid out, I might have an execution space implementation done before the next call too ;)
<gnikunj[m]>
<hkaiser "didn't replicate take a majority"> there was a variant called vote
<hkaiser>
gnikunj[m]: hold your horses ;-)
<gnikunj[m]>
the one I have implemented here is replicate_validate
<gnikunj[m]>
there was replicate_vote_validate too, which validated all results and then passed them to the voting function
<gnikunj[m]>
I can have all of those variants implemented if you want me to
<gnikunj[m]>
it's mostly copy-paste for me
<hkaiser>
so you overwrite the result with each replica; wouldn't that create a race?
<hkaiser>
I think your exec bool would have to be an atomic
<gnikunj[m]>
yeah right, I need to make that atomic. Otherwise exec_result won't have races because it will only be assigned once
<gnikunj[m]>
also std::move(res) will be more efficient there
<hkaiser>
no, it may get assigned more than once
<hkaiser>
nod, correct wrt move
<hkaiser>
I meant: std::move would be correct
<gnikunj[m]>
how's that? exec_bool is default-constructed to false. Once it's set to true, the program flow won't enter the if statement
<gnikunj[m]>
so exec_result should be assigned once
<gnikunj[m]>
I should add a lock there
<hkaiser>
sure, but there is a gap between the test !exec_bool[0] and the assignment
<hkaiser>
so more than one thread might see exec_bool[0] == false
<hkaiser>
but a lock would do
<gnikunj[m]>
right. Well we can separate into 2 ifs or use a lock
<hkaiser>
nod
<gnikunj[m]>
if we separate it into 2 ifs, we need an atomic bool
<hkaiser>
or a lock
<gnikunj[m]>
is there an hpx::mutex?
<gnikunj[m]>
and an hpx::lock_guard
<hkaiser>
use std::lock_guard, and in device code you want to use std::mutex
<gnikunj[m]>
sounds good
<hkaiser>
but a simple atomic_bool would do
<gnikunj[m]>
right.
<gnikunj[m]>
atomic bool will be much simpler
<hkaiser>
if (!exec_bool[0].exchange(true)) { exec_result[0] = res; }
<gnikunj[m]>
right
<gnikunj[m]>
no wait, that won't work in cases where the predicate returned false. We need 2 ifs.
<gnikunj[m]>
let me correct the mistakes by tomorrow. Shouldn't be difficult.
<hkaiser>
gnikunj[m]: yes, I just showed the inner if
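A minimal sketch of the two-step check agreed on above; exec_bool, exec_result, validate, and res are stand-ins for the actual members of the replicate_validate implementation, not its real interface:
```cpp
#include <atomic>
#include <utility>

// outer if: only results that pass validation are candidates;
// inner if: exchange() guarantees exactly one replica publishes its
// result, closing the gap between the test and the assignment
template <typename Result, typename Validate>
void store_first_valid(std::atomic<bool>& exec_bool, Result& exec_result,
    Validate&& validate, Result res)
{
    if (validate(res))                       // predicate check
    {
        if (!exec_bool.exchange(true))       // first writer wins
        {
            exec_result = std::move(res);    // move instead of copy
        }
    }
}
```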
nanmiao has quit [Quit: Connection closed]
bita has quit [Ping timeout: 264 seconds]
nanmiao has joined #ste||ar
hkaiser has quit [Read error: Connection reset by peer]