hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
Coldblackice has joined #ste||ar
Coldblackice_ has quit [Ping timeout: 265 seconds]
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
Coldblackice has quit [Ping timeout: 265 seconds]
Coldblackice has joined #ste||ar
K-ballo has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has joined #ste||ar
weilewei has joined #ste||ar
<weilewei> I have a problem: HPX_FOUND is set (find_package passes), and I also do include_directories(${HPX_INCLUDE_DIRS}), but
<weilewei> [ 12%] Building CXX object src/parallel/mpi_concurrency/CMakeFiles/parallel_mpi_concurrency.dir/mpi_concurrency.cpp.o
<weilewei> In file included from /gpfs/alpine/proj-shared/cph102/weile/dev/src/hpx_dca/include/dca/parallel/mpi_concurrency/mpi_concurrency.hpp:25, from /gpfs/alpine/proj-shared/cph102/weile/dev/src/hpx_dca/src/parallel/mpi_concurrency/mpi_concurrency.cpp:13:
<weilewei> /gpfs/alpine/proj-shared/cph102/weile/dev/src/hpx_dca/include/dca/parallel/hpx/hpx.hpp:18:10: fatal error: hpx/hpx.hpp: No such file or directory
<weilewei> #include <hpx/hpx.hpp>
<weilewei> The build still cannot find the hpx header
<weilewei> I tried printing the HPX include path with message("${HPX_INCLUDE_DIRS}") in CMake, and it prints nothing
<weilewei> I guess hpx is not included properly?
<zao> I haven't followed the library much lately, but isn't it pretty much all CMake-style targets nowadays?
<weilewei> @zao so what should I do?
<weilewei> I guess I might go back to an older hpx version
<zao> There might be some flag to tune when building HPX, or maybe you could use the pkgconfig files?
<zao> Maybe look at the generated cmake/ files and see if it seems to set the variables you want.
<zao> And maybe look at where those are generated in the HPX build?
<zao> How do you detect HPX in your project, and what version/branch of HPX is it?
<weilewei> I am using Nov 11 commit cf909939a1d2de4768d332b03670104908d215d8
<weilewei> Is it documented anywhere? Like, how to detect HPX and link against the newest HPX
<hkaiser> just find_package(HPX)
<weilewei> hkaiser: by doing this, can I freely include hpx headers in my program?
<hkaiser> additionally you need a target_link_libraries(your_target PRIVATE hpx [hpx_init]) (hpx_init only for executables)
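(A minimal CMakeLists.txt sketch of what hkaiser describes; my_app and main.cpp are placeholders, and the exported target names used here, hpx and hpx_init, can vary between HPX versions.)

    cmake_minimum_required(VERSION 3.13)
    project(my_app CXX)

    # find_package(HPX) provides imported CMake targets, not just variables,
    # which may be why HPX_INCLUDE_DIRS prints empty on newer versions.
    find_package(HPX REQUIRED)

    add_executable(my_app main.cpp)

    # Linking the targets propagates include paths and compile flags,
    # so no include_directories(${HPX_INCLUDE_DIRS}) is needed.
    target_link_libraries(my_app PRIVATE hpx hpx_init)    # hpx_init only for executables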
<weilewei> ok, let me try to link my target using your method
<jaafar> If a shared_future becomes ready, will all tasks that depend on it become eligible for execution, or just the first one?
<jaafar> or "a" first one
<weilewei> hkaiser thanks, it works now. I have another error to deal with
<weilewei> lol
<hkaiser> jaafar: all of them
<jaafar> hkaiser: thanks! What I'm seeing is that tasks with no dependencies are favored over those with dependencies that have been met
<jaafar> does that sound right?
<jaafar> as in, task A depends on one or two futures... task B, defined after task A, has no dependencies. Neither has been run yet
<jaafar> task A's dependencies are supplied
<jaafar> task B runs first anyway
<hkaiser> jaafar: the scheduler runs things on a FIFO basis
<jaafar> hkaiser: so you would expect task A to run first?
<jaafar> or B
<jaafar> I don't have the right mental model I think :)
<jaafar> exclusive_scan has a bunch of tasks with no dependencies (the "f1" tasks)
<jaafar> then a bunch with dependencies (f2/f3)
<jaafar> and I'm seeing that all the f1's get executed first, regardless of whether there are any f2/f3's available whose dependencies are met
<jaafar> and that's despite the f2/f3's being defined (via a call to dataflow) intermingled with the f1's
<jaafar> so I guess I'm not understanding whether FIFO means "first defined" or "first to have all dependencies met"
<jaafar> If tasks only enter the scheduler's queue *after* their dependencies are met this makes sense
<jaafar> because all the independent tasks enter immediately as they are defined
<jaafar> and the ones with dependencies enter after they become ready
<jaafar> That would match my observation
<hkaiser> yes, that's what's probably happening
<hkaiser> tasks end up in the queue only after their dependencies are met
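(A minimal C++ sketch, not from the log, of the behaviour hkaiser describes, assuming the usual hpx::async / hpx::dataflow API of that era; the names taskA, taskB and the promise are made up for illustration.)

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <iostream>

    int main()
    {
        hpx::lcos::local::promise<int> p;
        hpx::shared_future<int> dep = p.get_future();

        // "Task A": defined first, but it only enters the scheduler queue
        // once 'dep' becomes ready.
        hpx::future<void> taskA = hpx::dataflow(
            [](hpx::shared_future<int> f) { std::cout << "A(" << f.get() << ")\n"; },
            dep);

        // "Task B": no dependencies, so it goes into the FIFO queue immediately.
        hpx::future<void> taskB = hpx::async([] { std::cout << "B\n"; });

        p.set_value(42);    // A's dependency is now met; only now is A queued

        hpx::wait_all(taskA, taskB);    // B is typically scheduled before A
        return 0;
    }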
<jaafar> Ah....
<jaafar> OK, I'll have a nice new picture for the bug report then :)
<hkaiser> :D
<hkaiser> perfect!
<jaafar> I think I can explain what's happening
<jaafar> hkaiser: am I right in thinking that "sync" dataflows get exempted from the queueing thanks to executing directly after the task that completed their dependencies?
<jaafar> i.e. they run immediately and don't pass through the scheduler
<hkaiser> jaafar: yes, sync prevents dataflow from creating a new task
<hkaiser> it instead executes it directly
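(A short sketch, again not from the log and with made-up names, of what hkaiser confirms about the sync launch policy: the continuation is executed directly rather than being scheduled as a new task.)

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <iostream>

    int main()
    {
        hpx::shared_future<int> dep = hpx::make_ready_future(1);

        // With hpx::launch::sync, dataflow does not create a new HPX task;
        // the continuation runs inline once its dependency is ready (here
        // immediately, because 'dep' is already ready).
        hpx::future<int> f2 = hpx::dataflow(hpx::launch::sync,
            [](hpx::shared_future<int> f) { return f.get() + 1; }, dep);

        std::cout << f2.get() << "\n";    // prints 2
        return 0;
    }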
Coldblackice_ has joined #ste||ar
<jaafar> that indicates a possible workaround...
<hkaiser> jaafar: well, let's identify the problem first
<hkaiser> there might be a 'proper' fix
Coldblackice has quit [Ping timeout: 265 seconds]
<jaafar> OK :) I'll make that picture
K-ballo has joined #ste||ar
<jaafar> hkaiser: I put up a couple of images - one from mostly unmodified code, the other with an interesting experiment