hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC2018: https://wp.me/p4pxJf-k1
anushi has quit [Ping timeout: 260 seconds]
anushi has joined #ste||ar
nikunj1997 has quit [Quit: bye]
anushi has quit [Ping timeout: 256 seconds]
anushi has joined #ste||ar
quaz0r has quit [Ping timeout: 260 seconds]
quaz0r has joined #ste||ar
hkaiser has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
V|r has quit [Ping timeout: 265 seconds]
V|r has joined #ste||ar
V|r has quit [Ping timeout: 240 seconds]
V|r has joined #ste||ar
anushi has quit [Ping timeout: 260 seconds]
anushi has joined #ste||ar
nikunj has joined #ste||ar
jakub_golinowski has joined #ste||ar
jaafar has quit [Ping timeout: 240 seconds]
anushi has quit [Remote host closed the connection]
jakub_golinowski has quit [Ping timeout: 240 seconds]
<marco_>
Hello, I have a short question. I need a for_each with simple ordered parallel execution (sequential but parallel). What is the correct way to do this?
anushi has quit [Ping timeout: 248 seconds]
anushi has joined #ste||ar
<hkaiser>
marco_: what is a 'sequential but parallel' execution?
anushi has quit [Read error: Connection reset by peer]
anushi has joined #ste||ar
<marco_>
4 threads, range 1:40, execution -> 1,2,3,4,5,6,7,8,9,... not in chunks (or chunk size 1)
<hkaiser>
where is the parallelism in that?
<hkaiser>
you want the iterations to run sequentially?
aserio has joined #ste||ar
<marco_>
Sorry for my confusing wording, ... The items should be executed in order. At the moment I use a simple hpx::parallel::for_each(hpx::parallel::execution::par, ...)
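If the concern were only the chunking mentioned above, HPX execution policies can carry a chunk-size parameter. Below is a minimal sketch assuming a 2018-era HPX API (hpx::parallel::execution::static_chunk_size); the per-element work is a placeholder, and a chunk size of 1 only controls partitioning, it does not by itself order the starts.

```cpp
// Minimal sketch, assuming a 2018-era HPX API. A chunk size of 1 only
// controls how the range is partitioned across worker threads; it does NOT
// by itself guarantee that elements start in sequence order.
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>
#include <hpx/include/parallel_executor_parameters.hpp>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v(40);
    std::iota(v.begin(), v.end(), 1);   // the range 1..40 from the question

    using namespace hpx::parallel;
    for_each(execution::par.with(execution::static_chunk_size(1)),
        v.begin(), v.end(),
        [](int i) { /* per-element work goes here */ (void) i; });

    return 0;
}
```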
<K-ballo>
uh? if the execution is in order, where is the parallelism?
<K-ballo>
I doubt you want to keep 3 threads idle waiting for the current element to finish execution before one of them starts executing the next
eschnett has joined #ste||ar
<hkaiser>
marco_: I agree with K-ballo, pls elaborate what you want to achieve
hkaiser has quit [Quit: bye]
<marco_>
No, I only want an ordered execution. When one element is finished, the next in the list should be started. Sorry, it is as simple as that, ...
<K-ballo>
so you don't want parallelism at all then? that makes more sense, but why ask for it in the first place?
<marco_>
All threads should work, but in order and without sleeps. There are no direct dependencies between the elements.
<K-ballo>
you are not making any sense; if execution is overlapping, then it follows that it is at least partially unordered
<K-ballo>
do you perhaps want the execution for each element to START in sequence order?
<K-ballo>
that is, don't start executing element `i + 1` before element `i` has started executing, but potentially before element `i` has ended executing
<marco_>
yes, only START execution in sequence order
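One way to get what K-ballo describes (start element i+1 only after element i has started, while the work itself may overlap) is to chain a per-element "started" signal. The following is only a rough sketch, not something recommended in this conversation; it assumes a 2018-era HPX API (hpx::lcos::local::promise, future::then, hpx::wait_all), and do_work is a hypothetical placeholder.

```cpp
// Minimal sketch: each element signals "I have started" before doing its
// work, and the next element is only launched once that signal has fired,
// so starts are released in sequence order while the work may overlap.
#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical per-element work; may run long and overlap with other elements.
void do_work(int i)
{
    std::printf("working on element %d\n", i);
}

int main()
{
    std::vector<hpx::future<void>> done;
    hpx::future<void> prev_started = hpx::make_ready_future();

    for (int i = 1; i <= 40; ++i)
    {
        hpx::lcos::local::promise<void> started;
        hpx::future<void> started_f = started.get_future();

        // Launch element i only once element i-1 has *started* (not finished).
        done.push_back(prev_started.then(
            [i, p = std::move(started)](hpx::future<void>) mutable {
                p.set_value();   // element i has started: the next one may go
                do_work(i);
            }));

        prev_started = std::move(started_f);
    }

    hpx::wait_all(done);
    return 0;
}
```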
rtohid_ has joined #ste||ar
eschnett has quit [Quit: eschnett]
hkaiser has joined #ste||ar
rtohid_ has quit [Remote host closed the connection]
jakub_golinowski has quit [Ping timeout: 256 seconds]
aserio has quit [Ping timeout: 245 seconds]
eschnett has joined #ste||ar
jakub_golinowski has joined #ste||ar
aserio has joined #ste||ar
jaafar has joined #ste||ar
<hkaiser>
V|r: do you plan to implement N4755 in Vc?
<hkaiser>
Vir: ^^
anushi has quit [Ping timeout: 256 seconds]
eschnett has quit [Quit: eschnett]
anushi has joined #ste||ar
<jakub_golinowski>
M-ms, I pushed the hpx_opencv_webcam example to the repo. It is very similar to the load_image example, as OpenCV allows for very easy image capture
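For context, "very easy image capture" with OpenCV usually comes down to a few lines around cv::VideoCapture. This is a plain OpenCV sketch, not the actual hpx_opencv_webcam example from the repo.

```cpp
// Minimal sketch of OpenCV webcam capture (plain OpenCV, without HPX).
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);          // open the default camera
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    cap >> frame;                     // grab a single frame
    if (!frame.empty())
        cv::imwrite("frame.png", frame);
    return 0;
}
```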
eschnett has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
aserio has quit [Ping timeout: 240 seconds]
Anushi1998 has joined #ste||ar
<jakub_golinowski>
M-ms, is it possible to execute hpx::apply(... on an arbitrary thread pool?
aserio has joined #ste||ar
aserio1 has joined #ste||ar
<hkaiser>
heller: skype?
aserio has quit [Ping timeout: 260 seconds]
aserio1 is now known as aserio
<heller>
hkaiser: yes
nikunj has joined #ste||ar
K-ballo has joined #ste||ar
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
nanashi55 has quit [Ping timeout: 240 seconds]
nanashi55 has joined #ste||ar
nanashi55 has quit [Ping timeout: 265 seconds]
Anushi1998 has quit [Quit: Bye]
nanashi55 has joined #ste||ar
<github>
[hpx] K-ballo force-pushed logging from 62d0e25 to dde4b04: https://git.io/vx6Yc
<jakub_golinowski>
somehow, before rebasing against master, the webcam was not working
quaz0r has quit [Ping timeout: 268 seconds]
jakub_golinowski has quit [Ping timeout: 276 seconds]
mcopik has quit [Ping timeout: 264 seconds]
quaz0r has joined #ste||ar
eschnett has quit [Quit: eschnett]
<M-ms>
jakub_golinowski: ok, will do
<M-ms>
is martycam still not compiling? but your example webcam capture is compiling?
<M-ms>
regarding hpx::apply, yes, that's possible
<M-ms>
the service pools might have some restrictions, but for normal HPX pools apply is just async specialized to not return a future
<M-ms>
did you have some problems with it?
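As a rough illustration of the above, one can bind an executor to a named pool and pass it as the first argument to hpx::apply. This sketch assumes a 2018-era HPX API exposing hpx::threads::executors::pool_executor and a custom pool named "my-pool" registered with the resource partitioner at startup; the exact executor type and headers may differ between HPX versions.

```cpp
// Minimal sketch (assumptions: hpx::threads::executors::pool_executor is
// available, and a thread pool named "my-pool" was registered at startup).
#include <hpx/hpx.hpp>
#include <iostream>

void work()
{
    std::cout << "running on the custom pool\n";
}

void launch_on_pool()
{
    // Bind an executor to the named pool, then fire-and-forget onto it;
    // hpx::apply behaves like hpx::async but does not return a future.
    hpx::threads::executors::pool_executor exec("my-pool");
    hpx::apply(exec, &work);
}
```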
<M-ms>
jakub_golinowski: btw, they might not be able to reopen the PR unless you push a branch with the same name
<M-ms>
in this case you could have force pushed the rebased branch (someone who based work on your old un-rebased branch would have problems, but that's unlikely in these situations)
quaz0r has quit [Ping timeout: 256 seconds]
quaz0r has joined #ste||ar
hkaiser has joined #ste||ar
nanashi64 has joined #ste||ar
nanashi55 has quit [Ping timeout: 260 seconds]
nanashi64 is now known as nanashi55
aserio has quit [Quit: aserio]
galabc has joined #ste||ar
diehlpk_mobile has joined #ste||ar
eschnett has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]
diehlpk_mobile has joined #ste||ar
diehlpk_mobile has quit [Read error: Connection reset by peer]