hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
<weilewei> btw, just got an interesting question from the lab: is it possible for hpx to control 1 system thread while many hpx threads run on top of that system thread?
<weilewei> One person suspects some resources are thread-independent
<weilewei> simbergm shall I use your commit to test?
<weilewei> hmmm. not sure why spin_lock is invoked, though it is not my intention
<weilewei> Is the hpx stable tag gone? https://github.com/STEllAR-GROUP/hpx/tree/stable is not accessible
<weilewei> sorry if I throw too many questions here...
<weilewei> The latest stable tag is 1db0e523eefb955780fdf8ed9e6572040e507730, on Oct 17
<weilewei> oops, after checking out your commit, simbergm, all cmake builds failed... Is it the recent hpx build system change?
<simbergm> hmm, pushing the stable tag keeps failing... I'll look into that
<weilewei> simbergm thanks!
<simbergm> weilewei do you have a log of your cmake failure?
<weilewei> simbergm yes, because lots of executables rely on hpx_setup_target, etc.
<weilewei> let me pull it out
<weilewei> need to rebuild hpx.. give me some time
<weilewei> please find here
<simbergm> weilewei please try latest master
<weilewei> btw, just curious, is there any hpx setting that will force only 1 system thread to run, with multiple hpx threads running on top of that system thread?
<weilewei> simbergm ok, let me try
<simbergm> weilewei not sure what you mean other than --hpx:threads=1?
<weilewei> hmmm, so using --hpx:threads=1 actually allows only 1 hpx thread to run at a time?
<weilewei> ok, I see what it means
<zao> A number of system threads can be turned into something that can process HPX work.
<weilewei> ok
<zao> I don't know if you can explicitly make system threads you start yourself into HPX workers.
<zao> HPX normally spins up a thread pool with workers.
<zao> Which I'm assuming that command line option controls.
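[Editor's note] To make the exchange above concrete: with `--hpx:threads=1` the HPX runtime starts a single OS worker thread and multiplexes every lightweight HPX thread onto it. A minimal sketch, assuming an installed HPX with the 1.3-era headers (the program itself is hypothetical, not from the log):

```cpp
// Sketch only: requires HPX; hpx_main.hpp turns main() into an HPX thread.
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

#include <cstdio>
#include <vector>

int main()
{
    std::vector<hpx::future<void>> tasks;
    for (int i = 0; i < 100; ++i)
    {
        // Each hpx::async creates a lightweight HPX thread; with
        // --hpx:threads=1 all of them are scheduled onto one system thread.
        tasks.push_back(hpx::async([i] { std::printf("task %d\n", i); }));
    }
    hpx::wait_all(tasks);
    return 0;
}
```

Run as `./many_tasks --hpx:threads=1`: the 100 HPX threads still interleave cooperatively, but only one executes at any instant, which is what the command-line option controls.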
<weilewei> -- Found CUDA: /sw/summit/cuda/10.1.168 (found version "10.1") CMake Error at cmake/HPX_SetupCUDA.cmake:29 (target_link_directories): Cannot specify link directories for target "hpx::vc" which is not built by this project. Call Stack (most recent call first): CMakeLists.txt:1621 (include)
<weilewei> what is vc?
<weilewei> so I need to install vc first to enable HPX_WITH_CUDA...
<zao> Vc is a library for SIMD operations.
<weilewei> Right, before that change Vc was not required for HPX_WITH_CUDA
<simbergm> weilewei are you specifying HPX_WITH_DATAPAR_VC? if not, it's probably my fault... You should also not need it (or even be able to use it) on Summit
<simbergm> rori did an hpx::vc maybe sneak into HPX_SetupCUDA?
<weilewei> simbergm I am not specifying HPX_WITH_DATAPAR_VC on my build
<simbergm> Ah yes, hpx::vc should be hpx::cuda in that file...
<weilewei> oh, i see, thanks for that response..
<simbergm> weilewei could I ask you to make that change and maybe even open a PR?
<simbergm> Line 29 in cmake/HPX_SetupCUDA.cmake
<weilewei> yes, sure, I would love to; I will do it this afternoon
<simbergm> Thanks!
<weilewei> np, I will create that PR after my lunch : )
<simbergm> weilewei there's a couple of other hpx::vcs in that file that should be changed as well if you don't mind :)
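[Editor's note] The fix being requested amounts to renaming the imported target in `cmake/HPX_SetupCUDA.cmake`. A sketch of the kind of change, reconstructed from the error message above; the exact surrounding code and argument list are assumptions (`CUDA_TOOLKIT_ROOT_DIR` is used as a placeholder):

```cmake
# cmake/HPX_SetupCUDA.cmake, around line 29 (sketch)
# Before: the CUDA setup referenced a target that only exists when
# HPX_WITH_DATAPAR_VC is enabled:
#   target_link_directories(hpx::vc INTERFACE ${CUDA_TOOLKIT_ROOT_DIR}/lib64)
# After: attach the link directory to the CUDA target instead:
target_link_directories(hpx::cuda INTERFACE ${CUDA_TOOLKIT_ROOT_DIR}/lib64)
```

The same `hpx::vc` → `hpx::cuda` rename applies to the other occurrences in that file that simbergm mentions.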
<weilewei> simbergm noted, and I just created the PR to HPX
<hkaiser> weilewei: thanks a lot!
<weilewei> hkaiser welcome
<weilewei> I tried to build dca with the latest hpx branch + my cuda cmake change, and got this error:
<weilewei> CMake Error at cmake/dca_testing.cmake:85 (add_executable): Target "tp_accumulator_gpu_test" links to target "hpx::cuda" but the target was not found. Perhaps a find_package() call is missing for an IMPORTED target, or an ALIAS target is missing? Call Stack (most recent call first):
<weilewei> test/unit/phys/dca_step/cluster_solver/shared_tools/accumulation/tp/CMakeLists.txt:30 (dca_add_gtest)
<hkaiser> nod
<weilewei> What should I do with this?
<weilewei> Also, it is not related to the cuda issue, but when I build dca with the latest HPX, I have to pass JEMALLOC_LIBRARY and JEMALLOC_INCLUDE_DIR to my cmake, otherwise it complains:
<weilewei> -- Could NOT find Jemalloc (missing: JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR) CMake Error at /gpfs/alpine/proj-shared/cph102/weile/dev/install/hpx_Debug/lib64/cmake/HPX/HPX_Message.cmake:48 (message): HPX_WITH_MALLOC was set to JEMALLOC, but JEMALLOC could not be found. Valid options for HPX_WITH_MALLOC are: system, tcmalloc, jemalloc, mimalloc,
<weilewei> tbbmalloc, and custom. Call Stack (most recent call first): /gpfs/alpine/proj-shared/cph102/weile/dev/install/hpx_Debug/lib64/cmake/HPX/HPX_SetupAllocator.cmake:63 (hpx_error) /gpfs/alpine/proj-shared/cph102/weile/dev/install/hpx_Debug/lib64/cmake/HPX/HPXConfig.cmake:52 (include) src/parallel/hpx/CMakeLists.txt:12 (find_package) -- Configuring
<weilewei> incomplete, errors occurred!
<weilewei> hpx itself builds successfully, though, and can find jemalloc. Before the hpx changes, I did not need to pass JEMALLOC_LIBRARY and JEMALLOC_INCLUDE_DIR to my dca cmake manually
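[Editor's note] A workaround matching the error above: because DCA++'s `find_package(HPX)` re-runs HPX's jemalloc discovery, the cache variables named in the message can be passed at configure time. A sketch only; the paths are placeholders, not from the log:

```shell
# Sketch: tell DCA++'s CMake where jemalloc lives so HPX's
# allocator setup can find it (replace the paths with real ones).
cmake -DJEMALLOC_LIBRARY=/path/to/lib/libjemalloc.so \
      -DJEMALLOC_INCLUDE_DIR=/path/to/include \
      /path/to/dca/source
```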
<hkaiser> simbergm: ^^
<simbergm> hkaiser weilewei, urgh, sorry about that
<hkaiser> simbergm: np at all
<simbergm> I think the easiest fix right now would be to go back to the commit right before we merged the cmake branch and set HPX_HAVE_VERIFY_LOCKS=OFF
<simbergm> That way we're not blocking you with these problems
<simbergm> Or just go back to 1.3.0 even
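[Editor's note] simbergm's fallback can be sketched as follows. Whether the Oct 17 stable commit quoted earlier in the log is the right pre-merge point is an assumption, and the lock-verification option is spelled exactly as in the chat (check your HPX version for the actual CMake option name):

```shell
# Sketch: pin HPX to a known-good state instead of current master.
git checkout 1db0e523eefb955780fdf8ed9e6572040e507730  # stable tag from Oct 17
# ...or fall back to the last release:
# git checkout 1.3.0
# then configure with lock verification disabled (option name as given
# in the chat; verify against your HPX version's CMake options):
cmake -DHPX_HAVE_VERIFY_LOCKS=OFF /path/to/hpx/source
```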
<weilewei> simbergm thanks for that, I will try it out
<simbergm> Daint is back online again so we can test and fix the Cuda configuration (again...)
<simbergm> Tomorrow