aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
Smasher has joined #ste||ar
gedaj has joined #ste||ar
gedaj has quit [Client Quit]
<hkaiser>
parsa: yah, I just retriggered it, it should go through now
<hkaiser>
hpx master passed on circleci 1 hour ago
<jbjnr>
I tried compiling stuff on windows for the first time in N years and I cannot make it run
<heller_>
hmm
<jbjnr>
from the terminal it runs, but from the debugger it can't find the DLLs; I've set the env PATH in the Visual Studio GUI, but I've not used it for so long that I can't make it work
<jbjnr>
I'll have lunch and then try again after
david_pfander has joined #ste||ar
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
<heller_>
jbjnr: you should probably ask the windows guys
david_pfander has quit [Quit: david_pfander]
<jbjnr>
heller_: I fixed it after realizing that I'd gone stupid for real
<hkaiser>
simbergm: have you seen #3166?
<hkaiser>
(I'll handle #3165)
<simbergm>
hkaiser: I have, I saw that behaviour as well but it should be fixed after my thread pool suspension PR
<simbergm>
I'll add a comment asking them to try a newer (but not the newest) commit
<hkaiser>
ok, thanks
<hkaiser>
I think they used top of master
<simbergm>
will check
<marco>
Hi, I'm back.
david_pfander has joined #ste||ar
<marco>
My performance issue with the special 1D stencil has gone away. I have updated the clang compiler to AOCC 1.1 and moved to the current HPX master version.
<hkaiser>
nice
<marco>
fyi, there is a bug in the Intel compiler 18.0.1: "hpx::parallel::for_loop( hpx::parallel::par, 0u, 128u, []( auto ) {} );" does not compile; it triggers an internal error in icpc. Intel can reproduce it, and it will be resolved in the next release.
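A minimal self-contained reproducer for the ICE marco describes might look like this (a sketch; it wraps exactly the call he quotes and assumes an HPX version where hpx::parallel::par is still a valid policy name):

    // reproducer sketch for the icpc 18.0.1 internal compiler error
    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_loop.hpp>

    int main()
    {
        // icpc 18.0.1 reportedly ICEs on the generic (auto) lambda here
        hpx::parallel::for_loop(hpx::parallel::par, 0u, 128u, [](auto) {});
        return 0;
    }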
<heller_>
nice
<simbergm>
heller_, hkaiser: I'm trying to fix examples and understand components a bit better at the same time
<simbergm>
create is async, right? is there a way to wait for the async create to finish?
<heller_>
.get()?
<heller_>
or well, this->get_gid()
<simbergm>
get_gid is actually the one that fails later
<simbergm>
if (!shared_state_) throw(...)
<simbergm>
if I create the component/client/(?) with new_ it's happy
<simbergm>
but wondering if something has changed in semantics or if the example was always broken
<simbergm>
nqueen has a similar (but easier) bug where I'm not sure if hpx changed or if it was always broken
<simbergm>
in general, has anyone checked that the examples run on previous releases?
<K-ballo>
heh
<K-ballo>
I did, once, about 5 years ago
<hkaiser>
heller_: .get()
<hkaiser>
please create components with new_<>, always
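For reference, a minimal sketch of the pattern hkaiser recommends (my_component is a hypothetical, regularly registered component type):

    // create a component instance asynchronously with hpx::new_<> and
    // wait for the creation to finish via .get()
    hpx::future<hpx::id_type> f = hpx::new_<my_component>(hpx::find_here());
    hpx::id_type id = f.get();   // blocks until the component exists

    // a client type can also be passed to new_; the returned client is
    // future-like, so operations on it wait for creation to complete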
<heller_>
I think it would make sense to include the examples in unit testing
<hkaiser>
the examples are probably outdated
<simbergm>
okay, so that's likely a relic of old times
<marco>
As a newbie I have another short question about the correct structure for a fast migration of existing code to HPX. Our existing code has three nested parallelization layers/loops: over nodes, over threads, and vectorization. Is it possible to create three nested HPX loops for the same scopes?
<hkaiser>
marco: sure
<hkaiser>
nested parallelism is not an issue
<hkaiser>
just make sure the tasks don't end up being too small
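A rough sketch of such nesting with plain parallel algorithms (blocks, block_t, row_t, and compute are hypothetical names; the outermost level, over nodes, would typically be distributed via actions/async instead):

    // two nested parallel loops; the innermost loop is left to the vectorizer
    #include <hpx/include/parallel_for_each.hpp>

    hpx::parallel::for_each(
        hpx::parallel::execution::par, blocks.begin(), blocks.end(),
        [](block_t& blk) {
            hpx::parallel::for_each(
                hpx::parallel::execution::par, blk.rows.begin(), blk.rows.end(),
                [](row_t& row) {
                    for (auto& cell : row.cells)   // vectorizable inner loop
                        cell = compute(cell);
                });
        });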
<heller_>
woohoo, another (almost) complete green buildbot cycle
<simbergm>
it's all your fault heller_...
<heller_>
I know!
sam29 has joined #ste||ar
<heller_>
I'd really like to have the partitioned_vector compilation problems fixed for this release
<heller_>
one of the CircleCI builds failed because clang got killed due to OOM
<simbergm>
so thread_pool_executors again, would we need to have dynamic pool creation to kick those out?
<simbergm>
what I know so far is that when it hangs, it looks like the thread_pool_executor gets spawned in a separate task, even though that shouldn't happen...
<simbergm>
*thread_pool_executor destructor
<simbergm>
is that possible under normal circumstances? it does call hpx::this_thread::suspend() but no async/apply or anything in the destructors
<hkaiser>
simbergm: is octotiger using the thread_pool_executors?
david_pfander1 has joined #ste||ar
<marco>
hkaiser: Is it enough to use for_each( par ) | for_each( par ) | for_each( par_vec )? Or must I also specify the executors manually?
david_pfander has quit [Ping timeout: 255 seconds]
david_pfander1 is now known as david_pfander
<heller_>
marco: depends ;)
<heller_>
it picks the default executor
* jbjnr
wants to get rid of wait_or_add_new
<heller_>
jbjnr: on it
<heller_>
jbjnr: give me another hour or two
<jbjnr>
ooh great. I'm leaving in a min, but will check back tonight
<heller_>
I'm there
<jbjnr>
(time is running out for me to fix the small tasks).
<marco>
ok, then I will specify the executors. Thank you very much!
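For the explicit variant, a policy can be rebound to a concrete executor with .on() (a sketch; data is a hypothetical container, and parallel_executor is one of the stock HPX executors):

    // attach an explicit executor to the parallel policy via .on()
    #include <hpx/include/parallel_executors.hpp>
    #include <hpx/include/parallel_for_each.hpp>

    hpx::parallel::execution::parallel_executor exec;
    hpx::parallel::for_each(
        hpx::parallel::execution::par.on(exec),
        data.begin(), data.end(),
        [](double& x) { x *= 2.0; });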
<heller_>
jbjnr: I want to have them fixed as well
<simbergm>
hkaiser: I don't know anything about octotiger, I was just asking because of thread_pool_executors_test hanging
diehlpk_work has joined #ste||ar
<diehlpk_work>
Today GSoC's organisations are announced :)
<hkaiser>
simbergm: ahh
eschnett has quit [Quit: eschnett]
david_pfander has quit [Ping timeout: 248 seconds]
sam29 has quit [Ping timeout: 260 seconds]
Smasher has joined #ste||ar
eschnett has joined #ste||ar
parsa has joined #ste||ar
<diehlpk_work>
Remember, the #1 thing our team considers when choosing which orgs to accept is the quality of the Project Ideas list.
<diehlpk_work>
Google focused more on the project proposals this year