<weilewei>
btw, just got an interesting question from the lab: is it possible for hpx to use 1 system thread while many hpx threads run on top of that system thread?
<weilewei>
One guy suspects some resources are thread independent
<weilewei>
simbergm shall I use your commit to test?
<weilewei>
hmmm. not sure why spin_lock is invoked, though it is not my intention
rtohid has left #ste||ar ["Konversation terminated!"]
<weilewei>
btw, just curious, are there any hpx settings that will force only 1 system thread to run, with multiple hpx threads running on top of that system thread?
<weilewei>
simbergm ok, let me try
<simbergm>
weilewei not sure what you mean other than --hpx:threads=1?
<weilewei>
hmmm, so using --hpx:threads=1 actually allows only 1 hpx thread to run at a time?
<weilewei>
ok, I see what it means
<zao>
A number of system threads can be turned into something that can process HPX work.
<weilewei>
ok
<zao>
I don't know if you can explicitly make system threads you start yourself into HPX workers.
<zao>
HPX normally spins up a thread pool with workers.
<zao>
Which I'm assuming that command line option controls.
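To make the above concrete, a minimal sketch of pinning HPX to a single OS worker thread from the command line (the binary name is a placeholder):

```shell
# Run the application with a single OS worker thread; HPX's lightweight
# user-level threads are then multiplexed cooperatively onto that one
# system thread by the scheduler.
./my_hpx_app --hpx:threads=1
```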
rori has quit [Quit: WeeChat 1.9.1]
<weilewei>
-- Found CUDA: /sw/summit/cuda/10.1.168 (found version "10.1")
CMake Error at cmake/HPX_SetupCUDA.cmake:29 (target_link_directories):
  Cannot specify link directories for target "hpx::vc" which is not built
  by this project.
Call Stack (most recent call first):
  CMakeLists.txt:1621 (include)
<weilewei>
what is vc?
<weilewei>
so I need to install vc first to enable HPX_WITH_CUDA...
<zao>
Vc is a library for SIMD operations.
<weilewei>
Right, before that change Vc was not required for HPX_WITH_CUDA
<simbergm>
weilewei are you specifying HPX_WITH_DATAPAR_VC? if not it's probably my fault... You should also not need it (or even be able to use it) on Summit
<simbergm>
roti, did an hpx::vc maybe sneak into HPX_SetupCUDA?
<weilewei>
simbergm I am not specifying HPX_WITH_DATAPAR_VC on my build
<simbergm>
Ah yes, hpx::vc should be hpx::cuda in that file...
<weilewei>
oh, I see, thanks for that response.
<simbergm>
weilewei could I ask you to do that change and maybe even open a pr?
<simbergm>
Line 29 in cmake/HPX_SetupCUDA.cmake
<weilewei>
yes, sure, I would like to, I will do it this afternoon
<simbergm>
Thanks!
<weilewei>
np, I will create that PR after my lunch : )
<simbergm>
weilewei there are a couple of other hpx::vcs in that file that should be changed as well if you don't mind :)
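If helpful, the batch rename could be sketched with sed; the demo below runs on a stand-in file rather than the real cmake/HPX_SetupCUDA.cmake, so the substitution can be sanity-checked before touching the repo:

```shell
# Stand-in line imitating the offending target_link_directories call.
printf 'target_link_directories(hpx::vc INTERFACE ${dirs})\n' > /tmp/demo.cmake
# Replace every hpx::vc occurrence with hpx::cuda, as discussed above.
sed -i 's/hpx::vc/hpx::cuda/g' /tmp/demo.cmake
cat /tmp/demo.cmake
```

Running the same sed against the real file and inspecting the diff before committing would cover the remaining occurrences in one pass.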
aserio has quit [Ping timeout: 268 seconds]
nikunj97 has quit [Quit: Bye]
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 264 seconds]
K-ballo1 is now known as K-ballo
aserio has joined #ste||ar
<weilewei>
simbergm noted, and I just created a PR to HPX
<hkaiser>
weilewei: thanks a lot!
<weilewei>
hkaiser welcome
<weilewei>
I tried to build dca with the latest hpx branch + my cuda cmake change, and then I got this error:
<weilewei>
CMake Error at cmake/dca_testing.cmake:85 (add_executable):
  Target "tp_accumulator_gpu_test" links to target "hpx::cuda" but the
  target was not found. Perhaps a find_package() call is missing for an
  IMPORTED target, or an ALIAS target is missing?
Call Stack (most recent call first):
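The error text itself points at the likely cause; a guess at the consumer-side workaround (paths are placeholders) would be making sure dca's CMake finds the HPX package, which is what defines imported targets like hpx::cuda:

```shell
# Hypothetical configure invocation: HPX_DIR must point at the HPX
# install's CMake package directory so that find_package(HPX) succeeds
# and the imported hpx::cuda target is visible to dca_testing.cmake.
cmake -DHPX_DIR=/path/to/hpx/install/lib64/cmake/HPX ..
```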
<weilewei>
Also, it is not related to the cuda thing, but when I build dca with the latest HPX, I have to pass JEMALLOC_LIBRARY and JEMALLOC_INCLUDE_DIR to my cmake, otherwise it complains:
<weilewei>
-- Could NOT find Jemalloc (missing: JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR)
CMake Error at /gpfs/alpine/proj-shared/cph102/weile/dev/install/hpx_Debug/lib64/cmake/HPX/HPX_Message.cmake:48 (message):
  HPX_WITH_MALLOC was set to JEMALLOC, but JEMALLOC could not be found.
  Valid options for HPX_WITH_MALLOC are: system, tcmalloc, jemalloc, mimalloc,
<weilewei>
Although hpx itself builds successfully and can find jemalloc. Before the hpx changes, I did not need to manually pass JEMALLOC_LIBRARY and JEMALLOC_INCLUDE_DIR to my dca cmake
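The workaround described above could look like this (library and include paths are placeholders for wherever jemalloc lives on the system):

```shell
# Hypothetical dca configure line passing the jemalloc hints explicitly,
# since the consuming project's CMake no longer locates jemalloc on its
# own after the HPX cmake changes.
cmake -DJEMALLOC_LIBRARY=/path/to/lib/libjemalloc.so \
      -DJEMALLOC_INCLUDE_DIR=/path/to/include \
      ..
```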
<hkaiser>
simbergm: ^^
aserio has quit [Ping timeout: 246 seconds]
<simbergm>
hkaiser weilewei, urgh, sorry about that
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
<hkaiser>
simbergm: np at all
<simbergm>
I think the easiest right now would be to go back to the commit right before we merged the cmake branch and set HPX_HAVE_VERIFY_LOCKS=OFF
<simbergm>
That way we're not blocking you with these problems
<simbergm>
Or just go back to 1.3.0 even
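A sketch of the rollback suggested above (the commit hash is a placeholder for the revision before the cmake branch merge; the option name follows the message above):

```shell
# Check out a revision from before the cmake branch merge and configure
# with lock verification disabled, as a temporary unblocking measure.
git checkout <commit-before-cmake-merge>
cmake -DHPX_HAVE_VERIFY_LOCKS=OFF ..
```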
<weilewei>
simbergm thanks for that, I will try it out
<simbergm>
Daint is back online again so we can test and fix the Cuda configuration (again...)
<simbergm>
Tomorrow
aserio has joined #ste||ar
diehlpk_work has quit [Remote host closed the connection]
Coldblackice has joined #ste||ar
Coldblackice has quit [Ping timeout: 264 seconds]
aserio has quit [Ping timeout: 265 seconds]
Coldblackice has joined #ste||ar
hkaiser has quit [Quit: bye]
jaafar has quit [Ping timeout: 245 seconds]
aserio has joined #ste||ar
weilewei has quit [Remote host closed the connection]