<LiliumAtratum>
complex versions of that function.
<LiliumAtratum>
oh... nvm. The old docs for hpx 1.0 seem to list the correct include. Sorry for troubling you. Still, the current documentation could be more verbose in that aspect ;)
<heller1>
LiliumAtratum: indeed. The module API reference has better include information ;)
<heller1>
but I guess those runtime-specific functions haven't been modularized yet
<LiliumAtratum>
Since I am here already, I have a more general question. I have several independent top-level tasks for hpx to crunch. I could just launch all those tasks in a loop and wait for all hpx::futures at the end. However, I expect I will run out of main memory before the hpx thread pool is saturated, and once things start swapping to disk the performance will deteriorate. Is there an idiomatic way to tell the scheduler not to launch too many of those top-level tasks at once?
<LiliumAtratum>
Currently I am thinking about just putting a counting semaphore at the beginning of each of those top-level tasks. But maybe there is a better way?
<heller1>
LiliumAtratum: there is the limiting_executor which does exactly that
<zao>
Yorlik: Nice, isn’t it?
<jbjnr>
LiliumAtratum: please note that I have some changes to the limiting_executor that I have not turned into a PR yet, so please feel free to comment on features you need.
<jbjnr>
PS. Who is LiliumAtratum ?
<LiliumAtratum>
Just another developer from across the globe who wants to incorporate hpx into their project ;)
<LiliumAtratum>
but I am a "noob" user at the moment.
<heller1>
we all started as beginners at some point ;)
<LiliumAtratum>
alas, `limiting_executor` gives me no hits in hpx 1.4.1 doc
<jbjnr>
I've made improvements to it, but they predate the modularization (files changed location), so it needs to be updated and merged back to master
<jbjnr>
the basic idea is to use the executor to say: when tasks_in_flight > N, stop launching, and when they fall below M, start launching again. So typically something like N=2000 and M=1000, but if you have very memory-intensive tasks, use smaller numbers
<jbjnr>
the new version has better waiting/blocking with less intrusive spin overhead, and better blocking on destruction or when waiting for tasks to drain
<LiliumAtratum>
oh, in my case it will be probably N=4, M=1 ;)
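(Editor's note: as a rough illustration of the high/low watermark idea jbjnr describes above, here is a minimal C++ sketch. It is not the limiting_executor itself, whose header and namespace have moved around between HPX versions; it hand-rolls the same throttling with an atomic in-flight counter and only basic HPX facilities (hpx::async, hpx::wait_all, hpx::this_thread::yield). The watermark values, task count, and task body are placeholders.)

```cpp
// Minimal sketch of the "tasks in flight" watermark throttle, assuming only
// basic HPX facilities; exact include paths may differ between HPX versions.
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>
#include <hpx/include/threads.hpp>

#include <atomic>
#include <cstddef>
#include <vector>

int main()
{
    constexpr std::size_t upper = 4;    // N: stop launching above this many tasks in flight
    constexpr std::size_t lower = 1;    // M: resume launching once the count drains to this

    std::atomic<std::size_t> in_flight{0};
    std::vector<hpx::future<void>> results;

    for (int task = 0; task != 100; ++task)    // placeholder number of top-level tasks
    {
        // Throttle: once 'upper' tasks are running, wait (yielding this HPX
        // thread) until the count drops to 'lower' before launching more.
        if (in_flight.load() >= upper)
        {
            while (in_flight.load() > lower)
                hpx::this_thread::yield();
        }

        ++in_flight;
        results.push_back(hpx::async([&in_flight, task] {
            // ... memory-hungry top-level work for 'task' would go here ...
            (void) task;
            --in_flight;
        }));
    }

    hpx::wait_all(results);    // wait for the remaining tasks at the end
    return 0;
}
```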
<heller1>
LiliumAtratum: would be cool if you could share some details about your project! It's always nice to have a little overview of our use cases
<LiliumAtratum>
Global point cloud registration. Matching point clouds of the same building obtained from different locations.
<LiliumAtratum>
With as little user input as possible. Ideally, fully automatic.
<LiliumAtratum>
Running on a single high-end PC. No cluster computing and such. But still, we believe hpx may help.
<heller1>
sounds like a cool project!
<heller1>
one last question: academia or industry?
<LiliumAtratum>
Industry
<heller1>
welcome aboard ;)
<LiliumAtratum>
the early version is already being used in-house :)
<heller1>
nice
<jbjnr>
Are you academic or industry? LiliumAtratum - (it is helpful for us to know who are users are)
<LiliumAtratum>
I used to work in academia but I work in industry right now. Still, our project requires a good deal of research, so that we know what we are writing in our program :)
mcopik has joined #ste||ar
mcopik has quit [Client Quit]
<heller1>
ms[m]: btw, sanitizer PR created
LiliumAtratum has quit [Remote host closed the connection]
<heller1>
tag_invoke should be clean as well now...
<ms[m]>
heller: thank you!
<heller1>
the sanitizer PR needs some more work ...
<heller1>
is daint up and running again?
<jbjnr>
no. CSCS systems are locked out due to an ongoing cyber attack
karame_ has quit [Remote host closed the connection]
sayefsakin has quit [Read error: Connection reset by peer]
weilewei has quit [Remote host closed the connection]
sayefsakin has joined #ste||ar
LiliumAtratum has joined #ste||ar
sayefsakin has quit [Ping timeout: 260 seconds]
LiliumAtratum has quit [Ping timeout: 245 seconds]
hkaiser has joined #ste||ar
LiliumAtratum has joined #ste||ar
<heller1>
ms[m]: #pragma once will take me a while to adapt...
<Yorlik>
heller1: You don't believe the build time I posted?
<heller1>
Yorlik: very hard to believe! After all, I think with all tests, you have more than 1000 targets. libhpx has around 400
<Yorlik>
I just measured again with an already downloaded hpx: from hitting F7 to "Build all succeeded" took 2'09", incl. tests but not examples.
<heller1>
well, good for you then ;)
<Yorlik>
Team Red gave me a nice and warm welcome :D
hkaiser has quit [Read error: Connection reset by peer]
hkaiser has joined #ste||ar
nikunj97 has joined #ste||ar
<ms[m]>
heller: you'll get used to it ;)
<ms[m]>
Yorlik: I don't believe you either
<heller1>
Yorlik: I doubt that you actually build all the HPX tests
<ms[m]>
it takes at least 15 minutes for us to build tests and examples on a dual 18-core Xeon system (examples don't add that much time)
<ms[m]>
I don't think the (pseudo)targets work very well on windows
<Yorlik>
What was weird was that there were only about 400 targets when I checked again - I wonder if it was really building the tests
<ms[m]>
that's roughly what the core libraries contain
<Yorlik>
then it skipped the tests - I'll double check my settings.
<Yorlik>
I just do this: -DHPX_WITH_TESTS=ON
<ms[m]>
linux or windows? what target did you build?
<Yorlik>
Oh crap - found a bug
<Yorlik>
there's a second place where it's unsetting the flag ... dammit
<ms[m]>
:P
<ms[m]>
how nice it would be to build all tests in two minutes...
<Yorlik>
measuring again ...
<Yorlik>
Still, I'm happy with that machine - Boost with vcpkg built in ~13 minutes.
<Yorlik>
Curious what will come out of this test.
<Yorlik>
Do I have to switch on all tests manually? Because it was crazy fast again and only a bit over 400 targets
Nikunj__ has joined #ste||ar
nikunj97 has quit [Ping timeout: 244 seconds]
<hkaiser>
ms[m]: pls feel free to use the PMC meeting link for the Kokkos meeting before that
<ms[m]>
Yorlik: what target are you building?
<Yorlik>
All
<ms[m]>
hkaiser: thanks! I think we don't have much to discuss though...
<hkaiser>
k
<ms[m]>
gdaiss: ^
<Yorlik>
But - obviously not - so I made an error somewhere
<Yorlik>
Obviously the tests do not get built
<Yorlik>
I'm setting -DHPX_WITH_TESTS=ON and all these variables: -DHPX_UTIL_WITH_TESTS=${HPX_ALL_TESTS} with HPX_ALL_TESTS = ON
<ms[m]>
Yorlik: they should even be on by default...
<ms[m]>
what does ccmake say?
<Yorlik>
Let me prepare some output for you ...
<ms[m]>
also, the tests aren't part of the `all` target, I think
<rori>
I think you also have to specify `HPX_WITH_TESTS_UNIT` and `HPX_WITH_TESTS_REGRESSION` for them to be enabled
<Yorlik>
rori: I'll add that and try - thanks !
<rori>
REGRESSIONS*
<Yorlik>
kk
<Yorlik>
I added both - still only 406 targets - I'll dig into the docs a bit
<ms[m]>
Yorlik: all of those should be on by default (also the modules unit tests, HPX_WITH_TESTS guards all of them but you shouldn't need to enable them explicitly)
<heller1>
hkaiser: ms[m]: #4305 should be good now!
<ms[m]>
try a clean build without setting any of the test-related options
<Yorlik>
Alright - I'll do that
LiliumAtratum has quit [Remote host closed the connection]
<ms[m]>
heller: thanks! let's wait until daint is back up though before merging anything new
<Yorlik>
All build directories were deleted - it was a clean build
<ms[m]>
windows or linux?
<Yorlik>
Windows
nan11 has joined #ste||ar
<Yorlik>
ms[m]: I added the cmake output to the gist
<ms[m]>
that's why... tests is a pseudotarget and pseudotargets are a bit messed up on windows (meaning they don't work)
<Yorlik>
Fix it?
<hkaiser>
jbjnr, ms[m]: meeting
<ms[m]>
yep
<Yorlik>
ms[m] I think it must have worked at some point in between - because I added the test variable for a reason - to not build the tests.
LiliumAtratum has joined #ste||ar
<LiliumAtratum>
Hello... I am coming with a new question. Any idea why I may be getting this linker error? `hpx.lib(hpx.dll) : error LNK2005: "public: static class hpx::components::detail::wrapper_heap_list<class hpx::components::detail::fixed_wrapper_heap<class hpx::components::managed_component<class hpx::lcos::detail::promise_lco<void,struct $promise_lco@XUunused_type@util@hpx@@@detail@lcos@hpx@@Uthis_type@2components@4@@components@hpx@@@detail@components@hpx@@SAAEAV?$wrapper_heap_list@V?$fixed_wrapper_heap@V?$managed_component@V?$promise_lco@XUunused_type@util@hpx@@@detail@lcos@hpx@@Uthis_type@2components@4@@components@hpx@@@detail@components@hpx@@@234@XZ) already defined in main.cpp.obj`
<LiliumAtratum>
I narrowed it down to `hpx::cout << hpx::flush;`. If I comment out that line, the link succeeds. But it feels like a side effect of me doing something wrong elsewhere.
<hkaiser>
LiliumAtratum: I'm in a meeting, will get back to you later
<LiliumAtratum>
see you then!
nan1194 has joined #ste||ar
nan1194 has quit [Remote host closed the connection]
nan11 has quit [Ping timeout: 245 seconds]
weilewei has joined #ste||ar
nan111 has joined #ste||ar
karame_ has joined #ste||ar
<Yorlik>
ms[m] I have an issue after linking HPX against the vcpkg debug version of Boost. When using Boost from vcpkg the header directory is the same as with release, but the libraries are in the /debug subdirectory. I can pass that as BOOST_ROOT, but then I have to pass the include directory separately as Boost_INCLUDE_DIR. Having done so, everything compiles nicely, but when using this debug HPX in debug mode I get an error that the current BOOST_ROOT is different from the one HPX was compiled with. It looks as if HPX is ignoring my previous setting from its build and just linking against the non-debug version of Boost when I pass the Boost_INCLUDE_DIR variable.
<Yorlik>
HPX claims its BOOST_ROOT was the non-debug version
<Yorlik>
I wonder if I should always set BOOST_ROOT to the vcpkg ROOT/installed/.../ and set Boost_LIBRARY_DIR instead
nikunj has quit [Read error: Connection reset by peer]
nikunj has joined #ste||ar
rtohid has joined #ste||ar
<hkaiser>
Yorlik: just use -DCMAKE_TOOLCHAIN_FILE=.../vcpkg/scripts/buildsystems/vcpkg.cmake
<hkaiser>
it will detect everything properly
<Yorlik>
OK
<hkaiser>
no need for BOOST_ROOT at all
<Yorlik>
I'm reconfiguring our build system at the moment to optionally use vcpkg or our homebrewed builds
<Yorlik>
Some of the vcpkg builds do not work for several reasons, e.g. Lua
<Yorlik>
They do not build Lua as C++ by default, for example
nikunj has quit [Ping timeout: 258 seconds]
nikunj has joined #ste||ar
<Yorlik>
hkaiser: fixed
<Yorlik>
Thanks!
LiliumAtratum has quit [Ping timeout: 245 seconds]
<heller1>
LiliumAtratum: you need to link against the iostreams_component library
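(Editor's note: for reference, a minimal sketch of the situation above, assuming the classic `hpx/include/iostreams.hpp` header; exact include and CMake wiring may differ with modularization, and `my_target` is a placeholder.)

```cpp
// Minimal sketch: a translation unit that uses hpx::cout. Every library or
// executable containing such code needs to link HPX's iostreams component,
// e.g. with something like
//     target_link_libraries(my_target PRIVATE HPX::iostreams_component)
// (the HPX::iostreams_component target name is the one confirmed later in
// this conversation).
#include <hpx/hpx_main.hpp>
#include <hpx/include/iostreams.hpp>

int main()
{
    // Linking the component inconsistently across libraries is what led to
    // the LNK2005 "already defined" error reported above.
    hpx::cout << "hello from HPX\n" << hpx::flush;
    return 0;
}
```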
<Yorlik>
hkaiser: YT?
<Yorlik>
I am running my app on 12 cores, 12 threads (.hpx.ini) now and am getting these affinity masks. Is this supposed to be like that?
<Yorlik>
worker-thread#00 0000000000000001
worker-thread#01 0000000000000100
worker-thread#04 0000000100000000
worker-thread#02 0000000000010000
worker-thread#03 0000000001000000
worker-thread#05 0000010000000000
worker-thread#06 0001000000000000
worker-thread#07 0100000000000000
worker-thread#08 0000000000000000
worker-thread#09 0000000000000000
worker-thread#10 0000000000000000
worker-thread#11 0000000000000000
LiliumAtratum has joined #ste||ar
<zao>
Are you printing those right?
<zao>
Yorlik: HPX will by default go onto 12 of my 24 hyperthreads, I assume it's every other one of them.
<Yorlik>
I just took that from the CS thread view when stopping the debugger
<hkaiser>
zao: we'll see - so far it was mightily braindead
<LiliumAtratum>
@zao This is indeed good news for me :)
karame_ has quit [Remote host closed the connection]
<heller1>
nvcc is barely a compiler, more like a preprocessor
<LiliumAtratum>
hkaiser Formally nvcc itself is not a compiler, but a toolkit that delegates various compilation tasks elsewhere. For example, CPU code is handed off to another C++ compiler (clang, msvc, etc.), while device code is processed by something underneath provided with the CUDA toolkit.
<LiliumAtratum>
and that "something undernath for device code" was stuck to C++14 for quite some time...
<LiliumAtratum>
I am glad to hear they are moving it forward
<heller1>
Transpiler!
<zao>
When in doubt, blame/praise wash :)
<hkaiser>
heller1: it will still produce that C gobbledygook
<LiliumAtratum>
heller1 I was sceptical about how linking against something could help resolve a symbol that is already defined too many times. However, with your hint, I properly included `HPX::iostreams_component` in every lib that needs it and the problem is gone. Thank you!
<heller1>
Yeah...
nan111 has quit [Remote host closed the connection]
<heller1>
Oh, I only read hpx::cout and linker error... Completely overlooked the 'already defined' part
<heller1>
Do you happen to have a small reproducible example for that?
<LiliumAtratum>
No, I have a humongous app for that ;)
<LiliumAtratum>
split into a few libs, using cmake, and with many dependencies
<heller1>
That doesn't help :p
<LiliumAtratum>
anyway, the problem is gone now, since I properly set up the linking
<heller1>
Did you have the iostreams_component as a target_link_library of at least one of your libs?
<LiliumAtratum>
I had it only in one. Now I have it everywhere I use it, in the PRIVATE section
karame_ has joined #ste||ar
<heller1>
Interesting, this might have messed it up
<heller1>
I'm not that familiar with the msvc linker to be of any real help here though
rtohid has quit [Remote host closed the connection]
LiliumAtratum has quit [Remote host closed the connection]
rtohid has joined #ste||ar
<zao>
I just realized that if I put HPX in a VR application any scheduler hiccups could cause the user to throw up. This sounds like a noble goal.
<Yorlik>
heller1: I'm currently doing measurements of frametime versus object count and cores used - gotta see how the USL charts are going to look when it's done :)
<heller1>
Looking forward to seeing the results!
<heller1>
zao: sounds like a good goal
weilewei has quit [Remote host closed the connection]
weilewei has joined #ste||ar
nan111 has joined #ste||ar
<diehlpk_work_>
hkaiser, the rotating star works in Debug both with and without one GPU