hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC: https://github.com/STEllAR-GROUP/hpx/wiki/Google-Summer-of-Code-%28GSoC%29-2020
<Yorlik> And this fixed it - lol:
<Yorlik> task_data* task_data_p = reinterpret_cast<task_data*>( hpx::this_thread::get_thread_data( ) );
<Yorlik> while ( task_data_p == nullptr ) {
<Yorlik> task_data_p = reinterpret_cast<task_data*>( hpx::this_thread::get_thread_data( ) );
<Yorlik> }
<Yorlik> lua = task_data_p->task_engine.get();
<Yorlik> That while is ... erm ... lol? But it fixed the nullptr issue
<Yorlik> So the updater starts working before the on_start lambda has finished
<Yorlik> hkaiser - I think this is an issue: the lambda should be finished before the tasks start. After all, I'm using it to actually limit task creation. Not sure if I'm doing something wrong here.
<Yorlik> Might be setting the task data in a wrong way
<Yorlik> So that was stupid - it never leaves the loop.
<Yorlik> But it means I am setting my data wrong
<Yorlik> hkaiser: Is this how it's supposed to be set in the on_start lambda? (unsafe method)
<Yorlik> ----
<Yorlik> auto task_data_p = new task_data {};
<Yorlik> task_data_p->task_engine = get_luaengine( );
<Yorlik> hpx::this_thread::set_thread_data( reinterpret_cast<size_t>( task_data_p ) );
<Yorlik> ----
<Yorlik> Because task_data_p is constantly a nullptr:
<Yorlik> --------
<Yorlik> task_data* task_data_p = reinterpret_cast<task_data*>( hpx::this_thread::get_thread_data( ) );
<Yorlik> lua_engine* lua = task_data_p->task_engine.get( );
<Yorlik> --------
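For reference, the intended set-then-read pattern can be sketched as a minimal, self-contained version. The `fake_hpx` namespace is a thread_local stand-in for `hpx::this_thread::set_thread_data`/`get_thread_data` (which store and return an opaque `size_t`); `task_data` and `lua_engine` are hypothetical types modeled on the snippets above, with `std::make_unique` standing in for `get_luaengine()`:

```cpp
#include <cstddef>
#include <memory>

// Thread-local stand-in for hpx::this_thread::set_thread_data/get_thread_data,
// which store and return an opaque std::size_t per (HPX) thread.
namespace fake_hpx { namespace this_thread {
    thread_local std::size_t thread_data = 0;
    inline void set_thread_data(std::size_t d) { thread_data = d; }
    inline std::size_t get_thread_data() { return thread_data; }
}}

struct lua_engine { int id = 42; };   // hypothetical engine type

struct task_data {
    std::unique_ptr<lua_engine> task_engine;
};

// on_start: allocate first, initialize, publish last - so a reader on the
// same thread either sees nullptr or a fully initialized task_data.
task_data* on_start() {
    auto* task_data_p = new task_data{};
    task_data_p->task_engine = std::make_unique<lua_engine>();  // get_luaengine() stand-in
    fake_hpx::this_thread::set_thread_data(
        reinterpret_cast<std::size_t>(task_data_p));
    return task_data_p;
}

// update: read it back on the same thread. A nullptr here means on_start
// never ran on this thread - spinning in a loop cannot fix that.
lua_engine* update() {
    auto* task_data_p = reinterpret_cast<task_data*>(
        fake_hpx::this_thread::get_thread_data());
    return task_data_p ? task_data_p->task_engine.get() : nullptr;
}
```

The key point of the sketch: `set_thread_data` must run on the same thread (and before) `get_thread_data`, which is exactly what the conversation below establishes.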
<hkaiser> sec
<hkaiser> Yorlik: yes, that's how it is supposed to be used
<Yorlik> Using it inside is always giving me a nullptr
<Yorlik> task_data_p = nullptr
<hkaiser> you set it in on_start?
<Yorlik> Yes
<Yorlik> With the code showed
<hkaiser> and where do you call get?
<hkaiser> you sure it's the same thread that called on_start?
<Yorlik> inside update()
<Yorlik> I am just using hpx::this_thread - might be wrong
<hkaiser> print the id to check
<Yorlik> Doing that
<hkaiser> hpx::this_thread::get_id()
<Yorlik> Yup - on it
<Yorlik> The lambda obviously isn't being called, for whatever reason
<Yorlik> I might have done something bad to the executor, I'm afraid ... checking
<Yorlik> hkaiser: It is executing the Executor Constructor 4 times (one per thread I guess), but never calling any of the interface functions, especially not bulk_async_execute
<hkaiser> so you're constructing an executor for each core, that's fine I guess
<hkaiser> also, you use for_loop, right?
<Yorlik> Yes
<Yorlik> I wonder if I have a silent fail somewhere in the call chain
<hkaiser> no idea
<hkaiser> for_loop definitely uses bulk_async_execute
<Yorlik> That's the loop:
<Yorlik> hpx::parallel::for_loop(
<Yorlik> hpx::parallel::execution::par
<Yorlik> .on( exec )
<Yorlik> .with( auto_chunk_size( autochunker_target_us * 1us ) ),
<Yorlik> 0,
<Yorlik> m_e_type::endindex.load( ),
<Yorlik> &update_entity<I>
<Yorlik> );
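The role of `bulk_async_execute` that hkaiser refers to can be sketched generically. This toy executor is only a model of the shape (HPX's real executor API differs in its details): a parallel loop hands its index range to the executor, which splits it into chunks and launches each chunk asynchronously:

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <future>
#include <vector>

// Hypothetical minimal executor: not HPX's actual API, just the shape of
// the bulk_async_execute customization point a parallel for_loop would call.
struct toy_executor {
    // Split [first, last) into chunks and launch each chunk asynchronously,
    // returning one future per chunk.
    template <typename F>
    std::vector<std::future<void>> bulk_async_execute(
        F f, std::size_t first, std::size_t last, std::size_t chunk_size) {
        std::vector<std::future<void>> chunks;
        for (std::size_t b = first; b < last; b += chunk_size) {
            std::size_t e = std::min(b + chunk_size, last);
            chunks.push_back(std::async(std::launch::async, [f, b, e]() {
                for (std::size_t i = b; i < e; ++i)
                    f(i);   // run one chunk of loop iterations
            }));
        }
        return chunks;
    }
};
```

This also illustrates why the chunker matters: if the chunker runs iterations directly (as the auto_chunker did here), the executor hook above is bypassed entirely.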
<Yorlik> Been using this for ages
<hkaiser> ok
<Yorlik> I must have done something today
<Yorlik> It used to work. But I didn't touch the executor
<hkaiser> the thing is that auto_chunk_size() might run (part of) the iterations directly, which circumvents using the on_start/on_end
<hkaiser> sorry, I forgot about that
<Yorlik> Ow ..
<Yorlik> Lemme switch that off a moment
<hkaiser> back to the drawing board, then
<Yorlik> You want to rework the executor?
<Yorlik> With static chunk size it works like a charm. I guess it's development :)
<hkaiser> yah
<hkaiser> sorry for all the trouble
<Yorlik> Naw - it's fun actually.
<Yorlik> And I'm learning a lot.
<Yorlik> And after all - you made this very nice executor after I needed it - so - that's simply great !
<Yorlik> If I make the static chunk size larger than the loop - will it run single threaded or still chop it at least into the number of worker threads?
<hkaiser> whatever the static chunker decides will happen, look at the code
<Yorlik> OK
<Yorlik> I guess it's get_chunk_size - it gives sensible defaults when 0 is passed
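As a sketch of the kind of default a `get_chunk_size`-style hook might produce when 0 is passed: the 4-chunks-per-core oversubscription factor below is an illustrative assumption for load balancing, not necessarily HPX's actual formula.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative default: when 0 is passed as the chunk size, derive one
// from the core count. The factor of 4 chunks per core is an assumption
// made for this sketch, not what HPX necessarily computes.
std::size_t default_chunk_size(std::size_t count, std::size_t cores) {
    std::size_t chunks = cores * 4;   // aim for ~4 chunks per core
    return std::max<std::size_t>(1, (count + chunks - 1) / chunks);
}
```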
<Yorlik> Updating 100,000 objects with 100,000 calls into Lua in ~1.5 - 2.0 seconds
<Yorlik> Not too bad as a start
<Yorlik> Can't wait to run that on my threadripper next week :D
<Yorlik> 12 cores :)
<hkaiser> Yorlik: we need to make the autochunker run the function using the executor instead of running it directly
<hkaiser> that's a bit involved (requires API changes) but not difficult
<Yorlik> Will it cost performance?
<hkaiser> but this would fix an obvious oversight in the initial design
<hkaiser> no perf impact, I think
<Yorlik> Allright.
<Yorlik> I can wait for it.
<Yorlik> I'll just use the static chunker with 0 as param for that time
<hkaiser> autochunker would use executor::sync_execute to run the function
<Yorlik> It gives some nice defaults depending on core count
<hkaiser> k
<Yorlik> I think my keystrokes are not coming all through - I need to visually check
<hkaiser> Yorlik: static chunker with zero as its argument is the default, I think - no need to use a chunker at all in this case
<Yorlik> OK
<Yorlik> So just don't call .with()
<hkaiser> I might create the PR for this later today or tomorrow
<Yorlik> Great. This is fun to work like this. I feel supported :)
<Yorlik> BTW - with the removal of another lock by using the task data the exceptions in Debug Mode are gone (for now)
<Yorlik> Alright - time to sleep - it's 4:00 A.M. here ... Good Night!
hkaiser has quit [Quit: bye]
<jbjnr> what's the oldest version of boost that we support in hpx
<jbjnr> anyone remember?
<heller1> should be documented
<jbjnr> I couldn't be arsed looking it up, thought someone might remember it
<jbjnr> 1.61 or newer
<rori> it is set to 1.61
<jbjnr> turns out to be easy to find
<jbjnr> thanks rori
<heller1> 1.67.0 or newer
<heller1> oh, recommended ;)
<jbjnr> I'm setting up spack on rostam to build all boost versions, with all compilers etc etc
<heller1> cool
<zao> (eeew)
<heller1> flamewars incoming?
<zao> Nah, just contractually obligated as an EasyBuild maintainer to react :P
<heller1> ;)
jaafar has quit [Ping timeout: 244 seconds]
jaafar has joined #ste||ar
<jbjnr> zao: from my limited experience, spack seems to be a bit better than easybuild for us users. I'm able to do more with less errors/pain, though I'm still struggling a bit with some of the module related issues.
<zao> jbjnr: Yeah, it's friendlier toward individual developers/users, while EB is more about setting up a whole cluster site up-front.
<zao> Choices are good.
<jbjnr> lol
mcopik has joined #ste||ar
mcopik has quit [Client Quit]
<jbjnr> I thought we were requiring hwloc 2.0. Can we also bump our requirements for any other tools? llvm from 3.8 to 10.0 covers a very large range, and gcc from 4.9 to 9.3 is a lot
<jbjnr> I will get spack to automatically install all versions of all of these for our rostam build matrix
<jbjnr> (I made up the jemalloc and perftools versions cos we don't require them)
<heller1> do we really support clang 3.8 and gcc 4.9 still?
<jbjnr> according to our docs we do
<jbjnr> I thought we were on gcc 6
<jbjnr> I'd like to reduce the list size a bit
<heller1> yeah, that's quite a number of configurations
<ms[m]> jbjnr: it's minimum gcc 7 on master now
<ms[m]> don't remember which clang
<ms[m]> but that's a massive list in any case
<ms[m]> the only wish I have is that we at least have some set of configurations that we always test (you can have random configurations on top of that if you'd like)
<jbjnr> ok gcc 7 it is
<jbjnr> Need to reduce the clang versions. I'll pick 7.0 as a lowest test value unless told otherwise
<jbjnr> can we insist on hwloc 2 as well?
<jbjnr> and raise the boost number?
<jbjnr> maybe I should ignore the versions superseded by a point release of everything too
<jbjnr> actually spack is doing that for me already I think
<ms[m]> it's clang 5 at the moment
<ms[m]> boost will probably be minimum 1.64 at the next release (10 versions)
<ms[m]> we might be able to bump clang as well
<ms[m]> one version per major version is enough for compilers (there are some regressions within the minor versions but fixes for those are case-by-case anyway)
<jbjnr> a bit, but not quite so extreme
<jbjnr> (I set hwloc to 2.0 minimum)
<heller1> jbjnr: I think we should only test > recommended
<jbjnr> good idea, then we can bump hwloc recommended to 2.0
<jbjnr> and bump clang up a bit for recommended maybe ...
<heller1> yeah, maybe the prior to the latest released version?
nikunj97 has joined #ste||ar
<ms[m]> I removed the recommended versions from the latest docs because either we support a version or we don't, and most of the time there are no practical differences for the user
<ms[m]> minimum and latest version should at least be tested
<zao> ms[m]: Sorry if I was a bit of a butt the other day, everything was bad :D
hkaiser has joined #ste||ar
<ms[m]> zao: lol
<ms[m]> you weren't and even if you had been, hpx can be a bit of a butt so it would only be fair ;)
<ms[m]> do feel free to bug us about all the issues you had if they're still relevant
<ms[m]> and that reminds me that I should fix that message about ignoring build types...
<zao> I'll probably give HPX a few tries, made some progress on it and found how executors were inherited or not.
<zao> Is there any way to `maybe_get()` a future? Give it a bit of a push and if it resolves fully get the thing, but if it doesn't, continue with the task you were on?
<zao> Right now I hpx::this_thread::yield(), but I don't necessarily want to give up my whole time slice forever.
<zao> As little as possible, if it makes sense, as my normal loop work on it is time-sensitive.
<hkaiser> zao: future::is_ready() ?
<hkaiser> or future::ready() for that matter, don't remember
<hkaiser> 'giving it push' is not possible :/
<zao> I was trying to use ready(), but that didn't help it make progress.
<zao> (on a single-thread executor, with the future on the same executor)
<zao> If I was using Asio, I'd do a bunch of `io_context::poll_one()` to process any eventual pending main thread work.
mcopik has joined #ste||ar
<hkaiser> we don't have a way of doing this except yielding
<hkaiser> but then you could simply call get() on the future and be done with it
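The `maybe_get()` idea zao describes - check readiness without blocking, and fall back to other work otherwise - can be sketched with `std::future`. As with `hpx::future::is_ready()`, the zero-timeout poll below only observes readiness; it cannot "give the future a push" on a single-thread executor, which is exactly the limitation hkaiser points out:

```cpp
#include <chrono>
#include <future>

// "maybe_get" sketch: poll a future without blocking. Like
// hpx::future::is_ready(), wait_for(0s) only observes readiness - it does
// not drive the producing task forward.
template <typename T>
bool try_get(std::future<T>& f, T& out) {
    if (f.valid() &&
        f.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        out = f.get();
        return true;
    }
    return false;   // not ready: go back to the time-sensitive loop work
}
```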
mcopik has quit [Client Quit]
<jbjnr> I just did a test on rostam and my simple boost examples worked if I installed boost with c++14 and compiled my tests with either c++14 or c++17 - do we really believe you can mix c++14 and c++17 together? it has never worked for me before, but maybe my env was messed up
<jbjnr> if I can just build one flavour of boost (per compiler) instead of 2 - it is a big saving
<hkaiser> jbjnr: gcc claims to be ABI compatible, no idea how much this is actually true
<jbjnr> I guess I can try installing boost with cxx14 and just see what gives ....
<jbjnr> I'm now dropping point releases if there is a better one in the same minor build, so 9.0.1 would replace 9.0.0
<jbjnr> etc etc
<hkaiser> jbjnr: can we do this with docker containers?
<jbjnr> I'm doing it with spack environments - similar
<hkaiser> what does Alireza say?
<hkaiser> ok
<jbjnr> I did not consult him. I only ask when I need help
<jbjnr> I'm learning spack and trying to integrate pycicle with it
<hkaiser> k
weilewei has joined #ste||ar
<jbjnr> :)
<zao> :P
<zao> I should see if I can get my gosh-darn Matrix server to federate some day.
rtohid has joined #ste||ar
<hkaiser> jbjnr: btw, from the list of combinations you plan to run on rostam - I think we shouldn't run more than 10 different variations, otherwise we will overwhelm that machine
<hkaiser> jbjnr: also we should avoid duplicating configurations with daint
<jbjnr> pycicle can pick random combinations of libs and flags
<hkaiser> how would that be reproducible?
<jbjnr> I'm not planning on building all of them, all of the time
<jbjnr> no way!
<jbjnr> it generates a string of flags and libs that can be pasted back into the launch command to reproduce a failed build
<ms[m]> jbjnr: we do want a fixed subset though that's always run
<ms[m]> also gsoc meeting with weilewei in 35 minutes, right?
<jbjnr> That's not a problem - it's all the stuff that fails constantly the rest of the time that annoys me
<weilewei> ms[m] Right
<jbjnr> the meeting was arranged to talk about dca, we'll just go over gsoc at the end because we're already online
<ms[m]> 👍️ (feel free to ping me if you're done earlier with the dca stuff)
<weilewei> ms[m] will do!
kale_ has joined #ste||ar
<diehlpk_work_> Our application for Season of Docs was not successful :(
gonidelis has joined #ste||ar
kale_ has quit [Ping timeout: 260 seconds]
nan11 has joined #ste||ar
Nikunj__ has joined #ste||ar
bita_ has joined #ste||ar
kale_ has joined #ste||ar
nikunj97 has quit [Ping timeout: 272 seconds]
karame_ has joined #ste||ar
<weilewei> jbjnr ms[m] hkaiser I just sent a GSoC weekly meeting invite if that works for your schedule from next week to August 24 (the end of GSoC)
<ms[m]> weilewei: thanks!
<hkaiser> weilewei: I can't make it at that time, at least until mid June
<weilewei> hkaiser or would you please suggest an alternative time?
<hkaiser> weilewei: pls coordinate with Katie
nikunj97 has joined #ste||ar
<weilewei> hkaiser ok
<hkaiser> I could meet 30 minutes earlier, i.e. Mondays 8:30am
<weilewei> jbjnr ms[m] does 30 mins earlier work for you? that will be 3:30pm in your time...
<jbjnr> fine for me
Nikunj__ has quit [Ping timeout: 252 seconds]
<ms[m]> weilewei: fine for me as well
<Yorlik> hkaiser: YT?
<hkaiser> Yorlik: here
<Yorlik> You actually had implemented sync_execute
<Yorlik> But it doesn't work.
<hkaiser> what doesn't work?
<Yorlik> I compiled the pr and switched back to the autochunker
<Yorlik> Same error as before
<hkaiser> hmm
<hkaiser> does it invoke sync_execute on your executor?
<Yorlik> Nope
<Yorlik> I get no output
<hkaiser> what executor traits do you define? one_way, two_way?
<hkaiser> ahh, you probably copied things from the example
<gonidelis> How can I run/check the tests under `hpx/libs/algorithms/tests`. I get that I need to use `ctest` ....
kale_ has quit [Ping timeout: 260 seconds]
<gonidelis> ?
<hkaiser> ctest allows to specify the targets to run
<Yorlik> hkaiser: is_one_way_executor, is_never_blocking_one_way_executor, is_two_way_executor, is_bulk_one_way_executor, is_bulk_two_way_executor
<rori> weilewei: could you invite me to the GSoC meeting too ? :)
<hkaiser> Yorlik: all of them forward to the embedded executor, no?
<Yorlik> Yes
<weilewei> rori sorry that meeting was for my project only, between me and my project mentors. Or you can start your own meeting with your mentors
<hkaiser> gonidelis: not sure if auto-completion works for ctest targets, though
<hkaiser> I'm not a linux person - others might be able to help
<rori> gonidelis: `ctest -R tests.unit.modules.algorithms`
<hkaiser> weilewei: rori is a mentor ;-)
<rori> if it's unit tests
<rori> or you can replace `unit` by regression if you want the regression tests
<weilewei> rori oh sorry... are you interested in concurrent data structure project as well?
<gonidelis> rori thank you thank you so much :)))
<rori> weilewei: yep ;)
<rori> just as a silent listener :)
<weilewei> rori can you send me your email, please?
<rori> aurianer@cscs.ch
<rori> thanks weilewei
<Yorlik> hkaiser: Moment - need to fix a bug - might have accidentally used wrong build
<hkaiser> gonidelis: you should be able to attach the name of the test to run as well: ctest -R tests.unit.modules.algorithm.for_each_test or somesuch
<weilewei> rori ok, sent it, look forward to seeing you!
<gonidelis> hkaiser yeah, got it... just couldn't find that `-R` flag that was needed
<rori> gonidelis: and to see all the targets you are interested in you can do `make help | grep tests.*modules`
<ms[m]> weilewei: rori sorry, should've introduced you two ;) but it seems you've come to an understanding
<gonidelis> ok, just some clarification q's:
<ms[m]> I got it on one of my prs and I think I'm up to date with master...
<gonidelis> 1. there is no modules dir under `tests/unit/...`. So what's that supposed to mean?
kale_ has joined #ste||ar
<ms[m]> gonidelis: tests/unit/... is where all the tests used to be
kale_ has quit [Remote host closed the connection]
<hkaiser> ms[m]: sec
<Yorlik> hkaiser: I'm getting a strange runtime error with the new build - I remember that was the reason why I had switched back quickly and forgotten about it. To make sure the error is not on my side I'll recompile the PR, though I think I had my local source tree right.
<rori> gonidelis: the tests related to a modules are located in the module directory i.e. `libs/<module_name>/tests/unit`
<ms[m]> we've been moving things piecewise into libs/modulename/tests/unit
<ms[m]> but there are still quite a few tests left in tests/unit
<ms[m]> they just don't belong to any particular module yet
<ms[m]> hkaiser: np, irc overload!
<rori> gonidelis: and the tests.unit.modules.<> is just a cmake target
<jbjnr> can drop boost to 1.64 if you want
<gonidelis> rori
<gonidelis> So we write the tests in the (HPX) source at `libs/<module_name>/tests/unit`, and then these tests are compiled through the build directory, as that is where their target is specified?
<gonidelis> Also I can see that some headers lie at `libs/algorithms/tests/unit/container_algorithms` while there exists a `libs/algorithms/include/...` directory which leads to corresponding headers... (???)
<rori> gonidelis: So if you look at the CMakeLists of the tests/unit of the affinity module for example you do a `add_hpx_unit_test` (see [here](https://github.com/STEllAR-GROUP/hpx/blob/master/libs/affinity/tests/unit/CMakeLists.txt#L25)) which is a cmake function (specified [here](https://github.com/STEllAR-GROUP/hpx/blob/master/cmake/HPX_AddTest.cmake#L192))
<rori> And if you follow the call hierarchy you see that we add the test target for example `tests.unit.modules.affinity` and that we make the dependency to `tests.unit.modules` so that it's called if you just `make tests.unit.modules` (see [here](https://github.com/STEllAR-GROUP/hpx/blob/master/cmake/HPX_AddTest.cmake#L164-L167)))
<rori> gonidelis: Not sure I understood your second question ^^
<hkaiser> ms[m]: I didn't fix this warning, just ignored it ;-)
<Yorlik> hkaiser: It works - don't ask what I did wrong - no one knows ...
<hkaiser> Yorlik: \o/
<Yorlik> I'm in the process of overhauling our build system - it's actually getting better, but I think some bone splinters got spilled while the sausage was made :)
<gonidelis> rori WOW!!!! Great! Thank you. The spaghetti unwrapping just amazes me so much... thanks. As for the second question: I can see that there are some hpp's under `/libs/algorithms/tests/unit/container_algorithms` for example. But there is also a directory that I reckon is used for includes (headers) at
<gonidelis> `/libs/algorithms/include/hpx/parallel/container_algorithms`... what's the difference? sorry if I confused you
<ms[m]> hkaiser: sneaky ;) but thanks
<ms[m]> I'll see if I understand where it's coming from, otherwise we should exclude that file...
<hkaiser> ms[m]: I think it's a problem in cmake-format itself
<hkaiser> Yorlik: could you comment on the PR, please?
<Yorlik> Yep - sry
<ms[m]> ok
<Yorlik> Did test in RelWithDebInfo only
<rori> gonidelis: so as the path indicates, if you find some headers under `/tests/unit/container_algorithms` it means they are headers used for testing; there is usually the word `test` in them to avoid confusion
<rori> The others are the headers of the `algorithms` module
<Yorlik> hkaiser: Done !
<hkaiser> thanks
<Yorlik> heller1: yt?
karame_ has quit [Quit: Ping timeout (120 seconds)]
<gonidelis> rori ah my bad... stupid oversight. Help appreciated a lot...
<rori> no worries ;)
<Yorlik> hkaiser: Any suggestion how to debug these 2 leaks? https://gist.github.com/McKillroy/c8740bf331a46afb26fc44ba65e7b337
<hkaiser> Yorlik: are you sure you're not slicing the msg by static_cast'ing it?
<Yorlik> The difference between base and derived is just the type
<Yorlik> All the data is in base
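hkaiser's slicing question can be illustrated generically - the actual msg types aren't shown, so `base`/`derived` below are hypothetical: casting a *pointer* never slices, while copying into a base *value* does (the sliced copy loses the dynamic type, even though, as in Yorlik's case, all the data lives in the base):

```cpp
// Hypothetical msg types illustrating the slicing question: a pointer
// static_cast never slices, but copying into a base *value* does.
struct base {
    int data = 1;                          // all data lives in the base
    virtual int type() const { return 0; }
    virtual ~base() = default;
};

struct derived : base {
    int type() const override { return 1; }
};

// Pointer cast: the object keeps its dynamic type (no slicing).
int type_via_pointer(derived& d) {
    base* b = static_cast<base*>(&d);
    return b->type();                      // calls derived::type
}

// Value copy: only the base subobject is copied (sliced).
int type_via_copy(derived& d) {
    base b = d;
    return b.type();                       // calls base::type
}
```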
<Yorlik> One leak I fixed
<Yorlik> I had forgotten to check futures on living objects
<Yorlik> I only checked on destruction
<Yorlik> Actually it seems both are gone
<hkaiser> which leak is still there?
<hkaiser> lol
<Yorlik> I was just accumulating dead futures and messages
<Yorlik> Now I'm doing a check on every object update if the object has a mailbox and is active
<Yorlik> Seems I need to do these things a bit more orderly ... :)
<Yorlik> The good news is: It wasn't really anything serious or problematic - just an oversight and getting used to cleaning up more regularly :)
<heller1> Yorlik: what up?
<Yorlik> heller1: I had made another trace, but I found the reason of the leaks.
<Yorlik> I had forgotten to not only check open futures on objects I destroyed, but living objects also
<Yorlik> I just added the checking function which finalizes the async requests to the updates.
<Yorlik> So each object which actually receives an update will get its open futures checked.
<Yorlik> And all objects on shutdown before destruction.
<Yorlik> So - it's resolved. :D
<heller1> Alright! In the meantime, I found one leak out of hpx
<heller1> I'm still hunting another one
<Yorlik> There are tiny leaks which are probably hpx - but not that dimension of what I saw.
<Yorlik> If you want traces I can provide you with some.
<heller1> Care to share the back trace?
<Yorlik> I also decided to buy Deleaker when the trial runs out - it's really a nice and affordable tool.
<heller1> One I found was coming from composable_guard
<Yorlik> NP - care for a quick screenshare to select what you really need?
<heller1> In an hour or so?
<Yorlik> Sure - poke me - I'm around.
<heller1> Great
<Yorlik> Do you have teamviewer?
<Yorlik> It has best screen quality of all free tools i know of.
gonidelis has quit [Remote host closed the connection]
<heller1> Yorlik: ready whenever you are
<Yorlik> I'm ready
<heller1> which venue? zoom works nicely for me
<Yorlik> OK - just send me a link
<hkaiser> heller1: yt?
<heller1> hkaiser: what up?
<hkaiser> heller1: would you have some time to talk at some point?
<heller1> hkaiser: sure! once Yorlik is done?
<hkaiser> any time
<Yorlik> Done ;)
<heller1> hkaiser: ready whenever you are
<bita_> hkaiser, 1158 relies on 1159, so I will rebase that after 1159 is merged if that's okay
<heller1> hkaiser: if you want, we can use the same link as above
nikunj97 has quit [Read error: Connection reset by peer]
<heller1> hkaiser: ping?
<heller1> calling it a day now...
<hkaiser> bita_: sure
<Yorlik> And another bunch of leaks cleared ... seems I'm getting orderly today :)
rtohid has left #ste||ar [#ste||ar]
<bita_> hkaiser, can I ask a question?
<hkaiser> sure
<bita_> I am trying to find where we decide that we can store to slice but not to slice_column/slice_row..., so I would be able to redirect them to slice_assign too. Would you please guide me on where I should look?
<hkaiser> bita_: not sure I understand
<bita_> I even looked into variable, but cannot see where it happens
<hkaiser> ahh, I think I know what you mean
<hkaiser> the store primitive understands slicing
<bita_> store(slice(a,1,0),val) works but store(slice_column(a,0),val) doesn't
<hkaiser> and the physl compiler is doing some trickery to make it happen
<bita_> yes...
<hkaiser> store(slice(a,1,0),val) is translated to store(a, val, 1, 0)
<bita_> which file does this translation?
<hkaiser> aren't store_row/store_column just shortcuts for slice?
<bita_> yes, but unfortunately they are only able to extract, not assign
nikunj has quit [Remote host closed the connection]
<hkaiser> sec
nikunj has joined #ste||ar
<hkaiser> I mean, shouldn't it be possible to rewrite slice_row(...) as store(..., ???) ?
<hkaiser> similarly for slice_column
<bita_> if I know where the translation is , I would be happy to write it :")
<bita_> thank you
<hkaiser> right
<hkaiser> not sure if this helps, though
<bita_> I will dig it up
<hkaiser> bita_: I think slice_row is very similar to slice with a None argument: https://github.com/STEllAR-GROUP/phylanx/blob/master/src/execution_tree/compiler/compiler.cpp#L876-L957
<hkaiser> same for slice_column - so you should be able to just use slice instead
<bita_> you mean I don't need to change slice_column to be able to be assigned to?
<bita_> hkaiser, ^^
<hkaiser> yes
<hkaiser> you should be able to use slice() in the kmeans code instead of slice_column
<hkaiser> right, so what are you trying to achieve, then?
nan11 has quit [Ping timeout: 245 seconds]
weilewei has quit [Ping timeout: 245 seconds]
<bita_> but I think it would be much better if we had that - as you can see, half of it uses extra lists
<hkaiser> nod
<bita_> When I change all slice_column/slice_row with slice the performance is really down even in that example with 30 points
<hkaiser> I can have a look, if you want - need to remind myself what we did for slice, however ;-)
<bita_> I appreciate that, I am trying to find things that I can do ;)
<hkaiser> ahh, then go ahead ;-)
<bita_> sure, I will let you know if I failed
<hkaiser> ok
<hkaiser> bita_: btw, I'll have a phone call with Andrew tomorrow to talk about Krylov methods and tings
<hkaiser> so he's finally back
<bita_> great
<hkaiser> I'll keep you in the loop, he was mumbling something about a possible small project over the summer
<bita_> :+1:
<hkaiser> bita_: I think the main entry point for handling slice() in the compiler is here: https://github.com/STEllAR-GROUP/phylanx/blob/master/src/execution_tree/compiler/compiler.cpp#L1406
<hkaiser> slice_column/_row could be handled similarly
<bita_> uhum, thanks
<bita_> got it
nan11 has joined #ste||ar
<hkaiser> bita_: let me repeat, instead of writing slice_column(centroids, 0) in the kmeans code you should be able to write slice(centroids, nil, 0), correspondingly, instead of slice_row(x, 1) you should be able to write slice(x, 1, nil)
<hkaiser> those are equivalent
<hkaiser> same when used with store()
<Yorlik> Seems I'm almost leak free now. I let 100k objects update for some minutes and only have some single-count leaks from stuff that is not significant or not my responsibility.
<hkaiser> at least as long as the variable is 2d - but for others slice_colum/row are undefined anyways
<hkaiser> Yorlik: the c library leaks things as well, mostly objects in global scope
<Yorlik> Yes - there is some weird stuff.
<bita_> Okay, so I use slice & nil instead of what I did in https://github.com/STEllAR-GROUP/phylanx/issues/1162 and leave it be
<Yorlik> I'll comb through the remaining leaks later - I'm pretty happy with the state of the leak situation now
<hkaiser> bita_: might be the easiest - but feel free to dig around in the compiler if you like ;-)
<bita_> I am trying to make slice distributed, so it was fun reading those files. I am changing it in 1167 and if it was a performance issue, we can change that later
<hkaiser> ok
<Yorlik> I'm getting a " boost::wrapexcept<boost::bad_any_cast>" exception when trying to use --hpx:print-counter=/threads{locality#0/total}/idle-rate
<Yorlik> Counters are on in my compile settings (which I didn't change anyway)
<Yorlik> Is something wrong with the param syntax?
<Yorlik> I wonder if it's somehow conflicting with my use of program-options - need to check this
nan11 has quit [Remote host closed the connection]