aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoun_ has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
shoshijak has quit [Ping timeout: 246 seconds]
vamatya has quit [Ping timeout: 246 seconds]
EverYoun_ has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
<taeguk> What is a good way to share a variable among tasks in HPX?
<taeguk> I'm writing a parallel is_heap_until for HPX. So if a user calls is_heap_until with execution::par(execution::task), is_heap_until returns instantly and is executed asynchronously.
<taeguk> That means I can't use a raw pointer or reference to share a variable among 'hpx::future's.
<taeguk> I think there is probably a good way to do this, but I don't know what it is.
<K-ballo> what kind of data do you wish to share?
<taeguk> maybe two std::size_t
<taeguk> The reason I want to share variables is that I want to get data to process dynamically.
<K-ballo> and you want to share, as in the full meaning of share, have a single instance of each accessible by all tasks to write to?
<taeguk> yes right.
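(A minimal sketch of the lifetime problem being discussed: with execution::par(execution::task) the caller returns before the tasks finish, so stack variables cannot be shared by pointer or reference, but a std::shared_ptr captured by value keeps the shared state alive for as long as any task co-owns it. The name count_in_parallel is illustrative, not HPX API.)

    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <atomic>
    #include <cstddef>
    #include <memory>

    // Tasks must co-own their shared state instead of referencing the
    // caller's stack; a shared_ptr capture does exactly that.
    hpx::future<std::size_t> count_in_parallel(std::size_t n)
    {
        auto counter = std::make_shared<std::atomic<std::size_t>>(0);

        hpx::future<void> a = hpx::async([counter, n] {
            for (std::size_t i = 0; i != n / 2; ++i)
                ++*counter;
        });
        hpx::future<void> b = hpx::async([counter, n] {
            for (std::size_t i = n / 2; i != n; ++i)
                ++*counter;
        });

        // read the shared value once both tasks are done
        return hpx::when_all(std::move(a), std::move(b))
            .then([counter](auto&&) { return counter->load(); });
    }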
<taeguk> in fact, I think using plain threads seems better than using futures and dataflow for what I want to do.
<K-ballo> that's an odd request, sharing is usually not good for parallel execution
<taeguk> K-ballo: yes, I know. As I mentioned above, I think using plain threads is better for my algorithm.
<K-ballo> unlikely, but if you feel it would then take it out for a spin, give it a try
<taeguk> One of the things I want is to get data to process dynamically.
<K-ballo> I don't know what that means, can't imagine how it would process statically otherwise
<taeguk> And the other is stopping all tasks (futures) when there is no need to process any more.
<K-ballo> a cancellation token?
<taeguk> yes maybe
<K-ballo> "don't bother anymore, I already know the answer"
<taeguk> what do you mean?
<K-ballo> I'm trying to represent the semantics of a cancellation token, when one would want to use it
<K-ballo> when you are computing an answer in parallel, say looking for an element in a sequence, once you find it there's no reason for the others to keep looking
<taeguk> That's right.
<taeguk> that is what I want.
<K-ballo> look for cancellation_token
<taeguk> Thank you! it is so helpful!
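(HPX ships a cancellation_token for exactly this pattern; at the time of this log it lives with the parallel algorithm internals, in hpx/parallel/util/cancellation_token.hpp. A hand-rolled equivalent of the "don't bother anymore, I already know the answer" semantics, using a shared atomic flag, looks roughly like this; found_here is a hypothetical stand-in for the real per-element test.)

    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <atomic>
    #include <memory>
    #include <vector>

    // hypothetical stand-in for "this task found the answer here"
    bool found_here(int task, int i)
    {
        return task == 2 && i == 1234;
    }

    void cancellable_search()
    {
        // shared flag: any task can set it, every task checks it
        auto cancelled = std::make_shared<std::atomic<bool>>(false);

        std::vector<hpx::future<void>> tasks;
        for (int t = 0; t != 4; ++t)
        {
            tasks.push_back(hpx::async([cancelled, t] {
                for (int i = 0; i != 1000000; ++i)
                {
                    if (cancelled->load(std::memory_order_relaxed))
                        return;    // someone else already has the answer
                    if (found_here(t, i))
                    {
                        cancelled->store(true);    // stop the others
                        return;
                    }
                }
            }));
        }
        hpx::wait_all(tasks);
    }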
<taeguk> but I have one more thing to resolve.
<taeguk> I want to get partitions dynamically.
parsa has joined #ste||ar
<taeguk> The objective is, in an array, to find the leading element which does not conform to a specific condition.
<taeguk> Surely, I can just use static_partitioner for this objective.
<K-ballo> never used the partitioners myself
<taeguk> But it can be inefficient. For example, suppose there are 4 cores and 12 elements. The partitions are [0, 2], [3, 5], [6, 8], [9, 11]. If the leading element is at index 2, using static_partitioner is inefficient: the other three partitions get scanned even though the answer is already in the first one.
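(A sketch of the dynamic alternative: rather than fixing the four partitions up front, each task repeatedly claims the next small chunk from a shared atomic index and quits as soon as a violation earlier than its chunk is known. The names first_violation and state_t are illustrative; this is not how HPX's partitioners are actually implemented.)

    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <limits>
    #include <memory>
    #include <vector>

    template <typename Pred>
    hpx::future<std::size_t> first_violation(
        std::vector<int> const& v, Pred pred, int num_tasks = 4)
    {
        struct state_t
        {
            std::atomic<std::size_t> next{0};    // next chunk to claim
            std::atomic<std::size_t> found{      // smallest violation so far
                std::numeric_limits<std::size_t>::max()};
        };
        auto state = std::make_shared<state_t>();
        std::size_t const chunk = 3;    // 12 elements / 4 cores, as above

        // note: v must stay alive until the returned future is ready
        std::vector<hpx::future<void>> tasks;
        for (int t = 0; t != num_tasks; ++t)
        {
            tasks.push_back(hpx::async([state, &v, pred, chunk] {
                for (;;)
                {
                    std::size_t begin = state->next.fetch_add(chunk);
                    if (begin >= v.size() || begin >= state->found.load())
                        return;    // nothing left, or answer is earlier
                    std::size_t end = (std::min)(begin + chunk, v.size());
                    for (std::size_t i = begin; i != end; ++i)
                    {
                        if (!pred(v[i]))
                        {
                            // atomically keep the smallest violating index
                            std::size_t old = state->found.load();
                            while (i < old &&
                                !state->found.compare_exchange_weak(old, i))
                            {
                            }
                            return;
                        }
                    }
                }
            }));
        }

        return hpx::when_all(std::move(tasks))
            .then([state](auto&&) { return state->found.load(); });
    }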
parsa has quit [Quit: Zzzzzzzzzzzz]
<hkaiser> taeguk: what do you mean by using 'just threads' instead of dataflow and futures?
<hkaiser> taeguk: I'd advise against using 'just threads', that will create more trouble than it's worth
<hkaiser> if you need concurrent execution, use async or dataflow
hkaiser has quit [Quit: bye]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
K-ballo has quit [Quit: K-ballo]
patg has joined #ste||ar
patg has quit [Client Quit]
shoshijak has joined #ste||ar
Matombo has joined #ste||ar
Matombo has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 246 seconds]
david_pfander has joined #ste||ar
jaafar has quit [Ping timeout: 246 seconds]
V|r has joined #ste||ar
Vir has quit [Ping timeout: 240 seconds]
Remko has joined #ste||ar
Matombo has joined #ste||ar
taeguk has quit [Ping timeout: 260 seconds]
taeguk[m] has quit [Remote host closed the connection]
thundergroudon[m has quit [Write error: Connection reset by peer]
taeguk[m] has joined #ste||ar
thundergroudon[m has joined #ste||ar
bikineev has joined #ste||ar
pree has joined #ste||ar
<github> [hpx] StellarBot pushed 1 new commit to gh-pages: https://git.io/vHLzn
<github> hpx/gh-pages f1dc84f StellarBot: Updating docs
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
<github> [hpx] ShmuelLevine opened pull request #2649: Update docs (Table 18) - move transform to end (master...ShmuelLevine-docs-transform-patch-1) https://git.io/vHLwd
Matombo has quit [Remote host closed the connection]
shoshijak has quit [Ping timeout: 246 seconds]
Remko has quit [Remote host closed the connection]
bikineev has quit [Ping timeout: 258 seconds]
bikineev has joined #ste||ar
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
bikineev has quit [Remote host closed the connection]
pree has quit [Ping timeout: 255 seconds]
shoshijak has joined #ste||ar
<hkaiser> heller_: would you mind responding to my comments to various PRs, please?
<hkaiser> heller_: we're really blocked here
<heller_> hkaiser: master is currently broken as well
<heller_> hkaiser: the timed executor tests
<hkaiser> nod
<hkaiser> heller_: it's just two tests, not all of master
bikineev has joined #ste||ar
<heller_> hkaiser: sure, still broken ;)
<heller_> meaning, you can fix them in the mean time ;)
<hkaiser> sure
thundergroudon[m has quit [Read error: Connection reset by peer]
taeguk[m] has quit [Remote host closed the connection]
<heller_> hkaiser: which PR specifically is blocking you?
thundergroudon[m has joined #ste||ar
<heller_> I am not qualified to comment on the windows fixes ;)
* zao hides
taeguk[m] has joined #ste||ar
EverYoung has joined #ste||ar
<hkaiser> heller_: I didn't ask you to
<hkaiser> you raised questions on the other PRs which I answered
bikineev has quit [Ping timeout: 240 seconds]
<heller_> which I replied to ;)
EverYoung has quit [Ping timeout: 255 seconds]
pree has joined #ste||ar
<github> [hpx] hkaiser pushed 1 new commit to build_with_vcpkg: https://git.io/vHLxr
<github> hpx/build_with_vcpkg 0d4f289 Hartmut Kaiser: Address review comment
<heller_> hkaiser: do you use vcpkg? how is it different from nuget?
<hkaiser> nuget is for binary packages, vcpkg builds things
<hkaiser> works very nicely, builds all of hpx (including dependencies etc.) with one command
<heller_> so you use it extensively?
<hkaiser> I don't use it, but it's a good way for windows people to get hpx installed
<hkaiser> what is the problem with that code?
<hkaiser> ^^
<heller_> shouldn't it read if( (NOT HPX_MSVC) OR HPX_WITH_BOOST_ALL_DYNAMIC_LINK)?
<hkaiser> ok, fine by me
pree has quit [Read error: Connection reset by peer]
diehlpk_work has joined #ste||ar
<shoshijak> Hi hkaiser: I got HPX to create and run multiple pools (for the moment they just run the scheduling loop, but no work is actually scheduled on the special pools; I will have to look at how to do this with executors later on). For now, I'm looking into the performance counters which I commented out in threadmanager.cpp.
<shoshijak> Taking for example the "queue-length" counter, does it make sense if I modify the "queue_length_creator" function so that it looks a lot like avg_idle_rate_creator, and write a function threadmanager::get_queue_length which sums the queue lengths of all the thread pools?
<shoshijak> I could then also modify queue_length_counter_creator so that it queries get_queue_length of a specific pool, specified by the user with "/threadqueue{locality#%d/pool%d}/length" or something like that
<hkaiser> shoshijak: I think this will require more changes
<hkaiser> counter instances will have to support upto three parts now
<hkaiser> locality#N/pool#N/thread#N
<hkaiser> and the cumulative counters should support locality#N/total and locality#N/pool#N/total
<hkaiser> also, should we keep supporting the current names (locality#N/thread#N)?
<shoshijak> I didn't think of that ... I thought we could just have locality#N/pool#N and locality#N/thread#N
pree has joined #ste||ar
<hkaiser> which of the threads would locality#N/thread#N refer to?
<hkaiser> just pool#0?
<shoshijak> hkaiser: supporting locality#N/thread#N still makes sense semantically. It's just confusing if in locality#N/thread#N, the whole set of threads is numbered from 0...N_total, and in locality#N/pool#N/thread#N, threads are numbered from 0...N_pool
<hkaiser> I wouldn't number all threads consecutively
<shoshijak> hkaiser: you're right, the user shouldn't have to know the order of the pools or something like that
<hkaiser> threads should be numbered relative to their pool, I think - not sure
<shoshijak> but then locality#N/thread#N would refer to...?
<hkaiser> pool#0?
<hkaiser> as before?
<hkaiser> not sure
<shoshijak> I guess that makes sense
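(For concreteness, a sketch of how such counter instances are queried by name. The two-part /threadqueue{locality#0/total}/length form exists today; the pool-qualified forms in the comments are the ones being designed in this discussion and are hypothetical.)

    #include <hpx/include/performance_counters.hpp>

    #include <cstdint>
    #include <iostream>

    void print_queue_length()
    {
        using hpx::performance_counters::performance_counter;

        // existing two-part instance name: all threads of locality 0
        performance_counter len("/threadqueue{locality#0/total}/length");
        std::cout << len.get_value<std::int64_t>().get() << "\n";

        // proposed three-part instance names (hypothetical):
        //   /threadqueue{locality#0/pool#1/total}/length
        //   /threadqueue{locality#0/pool#1/thread#2}/length
    }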
pree has quit [Read error: Connection reset by peer]
<shoshijak> hkaiser: supporting counter instances with 3 parts (ie locality#N/pool#N/thread#N) means I have to modify the struct hpx::performance_counters::counter_path_elements?
<hkaiser> nod, and a lot of the code :/
<shoshijak> I see ... in your opinion, should I first give that a try, or first try scheduling work on those special pools using executors?
<hkaiser> shoshijak: first you should make jbjnr happy by giving him a bunch of cores he can control and leave the rest to hpx
<hkaiser> provide some means of controlling this through an API while maintaining backwards compatibility to existing code
<hkaiser> (without any additional code the user would get what we had so far)
<shoshijak> ok. You said on the Github issue that executors were the way to go in order to give HPX-tasks to these special pools. Is there some already-existing executor I should have a look at that does something similar to this?
<hkaiser> shoshijak: sure, hold on
aserio has joined #ste||ar
<hkaiser> this schedules things on the current pool
<shoshijak> not the thread_pool_executor, thread_pool_os_executor, or thread_pool_attached_executor type of things?
<hkaiser> I think it currently relies on register_work or somesuch, so this would require some lower-level API changes
<hkaiser> yah, those are variations on the theme ;)
<hkaiser> thread_pool_os_executor creates a new thread_pool
<hkaiser> so this might be a possible thing to look at as well
<shoshijak> Great, I'll look at all this and give it a try. Thank you :)
<hkaiser> shoshijak: good luck!
<hkaiser> and thanks!
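(A sketch of the executor route being suggested here. hpx::async accepts an executor as its first argument, so work lands wherever that executor schedules it; local_priority_queue_executor below is one of the existing "variations on the theme", while a pool-selecting executor is exactly the piece that remains to be written.)

    #include <hpx/include/async.hpp>
    #include <hpx/include/thread_executors.hpp>

    void run_via_executor()
    {
        // an executor with its own scheduling domain (here: 2 threads)
        hpx::threads::executors::local_priority_queue_executor exec(2);

        // the task is scheduled through `exec`, not the default
        // scheduler; a pool-aware executor would redirect this to one
        // of the custom pools discussed above
        hpx::future<int> f = hpx::async(exec, [] { return 42; });
        f.get();
    }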
pree has joined #ste||ar
eschnett has quit [Quit: eschnett]
pree has quit [Read error: Connection reset by peer]
<K-ballo> zao: maybe we can leverage cmake's cxx features and skip some of the tests sometimes
<zao> K-ballo: Does that do The Right Thing based on the C++ language level chosen?
<zao> User can still override that, right?
<K-ballo> no idea...
<K-ballo> we'd still want to run the tests sometimes, so it would not be trivial
<zao> Two minutes of CMake step isn't the whole world, but if there's cheap wins...
pree has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
<hkaiser> K-ballo: how can one access the cmake's cxx features?
<hkaiser> ahh, cool
akheir has joined #ste||ar
<K-ballo> it's just a list of hardcoded known results
<hkaiser> we could try to associate those with our required features and skip our test if cmake says it's supported
<K-ballo> nod
<hkaiser> still running our test if cmake thinks it's not supported
<K-ballo> although.. it'd be better to still run our feature-tests custom tailored for our needs
<K-ballo> I don't think 2 minutes is reasonable
<hkaiser> nod
<K-ballo> something has to be off there
<hkaiser> K-ballo: my experiments with vcpkg recently showed that doing configure takes about as long as building all of hpx core afterwards
<K-ballo> what? that's nonsense!
<K-ballo> it's seriously broken
<hkaiser> it's not :/
<hkaiser> not nonsense, I mean
<K-ballo> heh, I don't mean it's not real, I mean it shouldn't possibly be
<hkaiser> things like testing for auto can be taken from cmake, so can other things be used, I'm sure
<hkaiser> deleted functions as well
<hkaiser> so parts of our feature tests could prefer looking at cmake's features, still falling back to our tests
<hkaiser> that would also cover for older cmake versions with a reduced set of features
* K-ballo fears we'd just be silencing a symptom
<hkaiser> our feature tests are run on a single core, sequentially; that makes them slow
eschnett has joined #ste||ar
<K-ballo> compiling is slow but it's not *that* slow
<hkaiser> you're right
<K-ballo> I'll look into it, we might be doing a lot more than we should have been doing.. wouldn't be a first
<hkaiser> heh
<mcopik> hkaiser: for segmented algorithms, all new tests should be put in unit/component/partitioned_vector_${algo_name}.xxx?
<hkaiser> mcopik: good question
<mcopik> so far all tests end up there
<mcopik> an alternative would be unit/parallel/segmented/${algo_name}.xxx
<mcopik> but we have them working only on a partitioned_vector
<hkaiser> mcopik: I'd rather move all of them to tests/unit/parallel/segmented_algorithms
pree has joined #ste||ar
<mcopik> hkaiser: ok, I'll let my student know
<hkaiser> thanks
<K-ballo> zao: we don't do feature-tests for features of a standard newer than the one specified anymore, so that part should be ok
* zao shakes a fist at HDF5
<zao> They apparently change public interfaces in micro versions.
<zao> Good thing HPX doesn't use it ;)
<hkaiser> zao: the C++ wrappers are not thread safe (by default)
<hkaiser> it requires special config options, so you can't use system-installed HDF5 binaries with hpx
hkaiser has quit [Quit: bye]
EverYoung has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
<pree> Does hpx currently support any feature to explicitly "cancel a future", meaning cancelling the async operation which is currently running, without an exception?
pree has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
akheir has quit [Remote host closed the connection]
<hkaiser> heller_: master should be fine now
<aserio> hkaiser: are you working from home today?
<hkaiser> aserio: waiting for my car, will come in as soon as I have it back
<aserio> hkaiser: Question: does hpx::components::copy do anything special? Or does it simply create a new component and copy the data over?
<hkaiser> that's what it does, yes
<aserio> I wasn't certain if there were any optimizations applied
<hkaiser> it serializes the source and constructs a new instance by deserializing the data on the destination
<hkaiser> aserio: what optimization did you have in mind?
<aserio> I didn't, I was just thinking about the checkpointing stuff
<aserio> It sounds like I should look at the implementation
<hkaiser> it's simple enough if you look from a certain distance
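(The usage, for reference; per the explanation above, copy serializes the source component and constructs the new instance by deserializing on the destination. Component stands for any copyable, i.e. serializable, server component type, and duplicate_on is an illustrative wrapper, not HPX API.)

    #include <hpx/include/components.hpp>

    // returns a future to the id of the freshly constructed copy
    template <typename Component>
    hpx::future<hpx::id_type> duplicate_on(
        hpx::id_type const& source, hpx::id_type const& target_locality)
    {
        // serialize `source`, ship the data, deserialize into a new
        // instance on `target_locality`
        return hpx::components::copy<Component>(source, target_locality);
    }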
denis_blank has joined #ste||ar
<mcopik> aserio: do you need slides from my presentation?
<aserio> mcopik: only if you want me to post it :)
hkaiser has quit [Quit: bye]
<mcopik> aserio: nothing to hide. I remember that you wanted slides from C++Now
<aserio> mcopik: Yes, we like to post the slides along with the citation on our website
<mcopik> aserio: for publication it's http://dl.acm.org/citation.cfm?doid=3078155.3078187
<mcopik> I'll send slides in an email
ajaivgeorge has joined #ste||ar
pree has joined #ste||ar
pree has quit [Client Quit]
pree has joined #ste||ar
bikineev has joined #ste||ar
akheir has joined #ste||ar
pree has quit [Read error: Connection reset by peer]
<aserio> mcopik: did you send me the paper as well?
<aserio> mcopik: Or can you download it for free?
<mcopik> I don't know if it's open access
<mcopik> I'll send you the paper in a second
<aserio> I can confirm it is not :p
<mcopik> aserio: ^
<mcopik> aserio: done
<aserio> thanks!
pree has joined #ste||ar
pree has quit [Client Quit]
<aserio> mcopik: Publications page updated!
<mcopik> aserio: thanks
pree has joined #ste||ar
hkaiser has joined #ste||ar
pree has quit [Ping timeout: 260 seconds]
pree has joined #ste||ar
aserio has quit [Ping timeout: 272 seconds]
hkaiser has quit [Quit: bye]
david_pfander has quit [Ping timeout: 246 seconds]
<diehlpk_work> Update for GSoC: May 30 Coding officially begins! Only one week left :)
shoshijak has quit [Ping timeout: 246 seconds]
<zao> Soon[TM]
<pree> working @diehlpk_work
jaafar has joined #ste||ar
<diehlpk_work> Perfect, pree
vamatya has joined #ste||ar
<pree> People @LSU: how do I increase the number of localities for the application?
<pree> and also how do I increase the number of threads beyond 64 on rostam? Please do reply
akheir has quit [Remote host closed the connection]
ajaivgeorge has quit [Ping timeout: 246 seconds]
EverYoung has quit [Ping timeout: 272 seconds]
bikineev has quit [Read error: Connection reset by peer]
pree has left #ste||ar ["Ex-Chat"]
bikineev has joined #ste||ar
pree has joined #ste||ar
pree has quit [Quit: bye]
eschnett has quit [Quit: eschnett]
atrantan has joined #ste||ar
<atrantan> heller_, yt?
hkaiser has joined #ste||ar
aserio has joined #ste||ar
hkaiser has quit [Client Quit]
aserio has quit [Client Quit]
aserio has joined #ste||ar
EverYoung has joined #ste||ar
bikineev has quit [Remote host closed the connection]
<mcopik> does anyone know why transform returns a tagged pair instead of OutIter?
atrantan has quit [Quit: Quitte]
<K-ballo> tagged pair sounds like a Ranges TS thing
<K-ballo> is that just for the range/container version of transform?
eschnett has joined #ste||ar
<mcopik> it's the usual implementation
<mcopik> not the one in container_algorithms
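(For reference, what the tagged pair gives you, following the Ranges TS convention K-ballo refers to: the algorithm reports where both the input and the output sequence ended, with the members accessed by tag rather than .first/.second. A hedged sketch against the HPX interface of that era.)

    #include <hpx/include/parallel_transform.hpp>

    #include <vector>

    void tagged_pair_demo()
    {
        std::vector<int> src{1, 2, 3}, dst(3);

        auto result = hpx::parallel::transform(
            hpx::parallel::execution::par, src.begin(), src.end(),
            dst.begin(), [](int x) { return x * 2; });

        // tagged_pair members are accessed by tag
        auto last_in = result.in();      // == src.end()
        auto last_out = result.out();    // == dst.end()
        (void) last_in;
        (void) last_out;
    }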
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 272 seconds]
aserio1 is now known as aserio
david_pf_ has joined #ste||ar
aserio has quit [Ping timeout: 260 seconds]
aserio has joined #ste||ar
<aserio> mcopik: yt?
<mcopik> aserio: yup
<aserio> see pm
<mcopik> problems coming?
<aserio> david_pf_: yt?
eschnett has quit [Quit: eschnett]
<david_pf_> aserio: somewhat :)
<aserio> lol see pm
* zao looks at the prime minister, confused
bikineev has joined #ste||ar
<Smasher> of which land?
hkaiser has joined #ste||ar
<K-ballo> Smasher: so how's work?
<zao> Whichever one works for the joke :)
<Smasher> K-ballo whew... a very good question
<K-ballo> I'm very good at questions, answers is where I struggle
eschnett has joined #ste||ar
aserio has quit [Quit: aserio]
david_pf_ has quit [Quit: david_pf_]
bikineev has quit [Remote host closed the connection]
bikineev has joined #ste||ar
bikineev has quit [Remote host closed the connection]