<hkaiser>
jaafar: another thing would be nice, namely to see the real priorities of the threads being run, perhaps using some coloring
<jaafar>
hkaiser: actually I already have the chunk size as a parameter, I'm just not plotting it against anything. I'm using a fixed size which empirically showed the fastest times
<jaafar>
but I can make a plot
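[For context: passing a fixed chunk size to an HPX parallel algorithm of this era might look like the sketch below. The algorithm, input size, and chunk value are placeholders, not jaafar's actual benchmark, and header spellings vary between HPX versions.]

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_scan.hpp>

#include <vector>

int main()
{
    std::vector<int> in(524288, 1), out(in.size());

    // Attach a fixed chunk size to the parallel execution policy;
    // 32768 is an arbitrary placeholder value.
    auto policy = hpx::parallel::execution::par.with(
        hpx::parallel::execution::static_chunk_size(32768));

    hpx::parallel::inclusive_scan(
        policy, in.begin(), in.end(), out.begin());

    return 0;
}
```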
<jaafar>
hkaiser: what's the API for finding out the real priority? thanks
<hkaiser>
so larger chunks don't improve things?
<hkaiser>
sec
<jaafar>
hkaiser: chunk size improves to a point then gets worse
<jaafar>
I'm sure you know my theory about why that is :)
<hkaiser>
yah
<hkaiser>
I start believing you're right
<hkaiser>
hpx::threads::get_thread_priority(hpx::threads_get_self_id()) will give you the priority of the thread that calls the function
<hkaiser>
that is hpx::threads::get_thread_priority(hpx::threads::get_self_id())
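[A minimal sketch of that call in context; everything besides get_thread_priority and get_self_id is illustrative.]

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/threads.hpp>

#include <iostream>

int main()
{
    hpx::async([]() {
        // Query the priority the calling HPX thread actually runs at.
        auto const prio = hpx::threads::get_thread_priority(
            hpx::threads::get_self_id());
        std::cout << "running at priority " << static_cast<int>(prio)
                  << std::endl;
    }).get();

    return 0;
}
```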
<jaafar>
OK great
<hkaiser>
even very large chunks (i.e. 4 chunks for 4 cores) give you bad perf?
<jaafar>
there are just three possibilities, right? I could use shades of green vs. shades of red
<hkaiser>
yah, it's either normal or high, low is not used at all - so two possibilities
<jaafar>
hkaiser: so here's one of the funny things, which impacts the chunk size situation
<jaafar>
for very large sizes (I am using 16777216) chunk size has little impact - maybe 10 or 15% max
<jaafar>
for medium to small sizes (my reference is 524288) it has a strong impact
<jaafar>
I can post the numbers
<hkaiser>
ok, that would be nice
<jaafar>
posted! the new graphs will take a little longer
<hkaiser>
jaafar: sure, thanks!
jaafar has quit [Quit: Konversation terminated!]
jaafar has joined #ste||ar
bibek has quit [Quit: Konversation terminated!]
bibek has joined #ste||ar
hkaiser has quit [Quit: bye]
K-ballo has quit [Remote host closed the connection]
<jaafar>
OK this is kind of interesting
<jaafar>
jbjnr: are you around?
<jaafar>
hkaiser asked me to show the thread priorities in my picture as well. Here's the weird thing: they are always 2, i.e. normal
<jaafar>
this despite the use of thread_priority_boost
<jaafar>
After running again with priority "high" instead of boost, I find... that all the stage 3 tasks are priority 5, and all the stage 1 tasks are priority 2.
<jaafar>
And yet... we run the stage 1 tasks preferentially!
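[One way the per-task samples behind such a plot could be collected, as a sketch: log_stage_sample is a hypothetical helper; get_worker_thread_num, get_thread_priority, and get_self_id are the real HPX calls. Per the observations above, priority 2 is normal and 5 is high.]

```cpp
#include <hpx/include/runtime.hpp>
#include <hpx/include/threads.hpp>

#include <chrono>
#include <cstdio>

// Hypothetical helper, called at the start of each stage task: emits
// one sample per task - which worker ran it, at what priority, when.
void log_stage_sample(char const* stage)
{
    auto const t_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();

    std::printf("stage=%s worker=%zu priority=%d t_ns=%lld\n", stage,
        hpx::get_worker_thread_num(),
        static_cast<int>(hpx::threads::get_thread_priority(
            hpx::threads::get_self_id())),
        static_cast<long long>(t_ns));
}
```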
Guest65867 has quit [Ping timeout: 276 seconds]
Guest65867 has joined #ste||ar
nikunj97 has joined #ste||ar
Coldblackice_ has joined #ste||ar
coldblackice has quit [Ping timeout: 240 seconds]
Guest65867 has quit [Remote host closed the connection]
Guest65867 has joined #ste||ar
<heller>
jaafar: boost is not high priority
<jaafar>
heller: right, this is an experiment hkaiser asked me to run *shrug*
<jaafar>
he said it "will raise the priority of the scheduled task but will not inherit the higher priority to any dependent ones"
<jaafar>
but I don't observe any priority change
weilewei has quit [Remote host closed the connection]
<heller>
Hmm
<heller>
Might be a bug then
rori has joined #ste||ar
mdiers_ has quit [Ping timeout: 240 seconds]
K-ballo has joined #ste||ar
<heller>
K-ballo: btw, I think that with the help of the customization point objects as they are presented in P0443, we are able to significantly reduce compile times
<heller>
my simple async + future test still compiles in under a second on my machine
<K-ballo>
once it's ready I can run a batch of tests on it
<heller>
*nod*
<heller>
and I believe we can get non-allocating futures with that
<heller>
at least for the case where the future gets a sender which is ready
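[For readers following along: a generic sketch of the customization point object pattern heller refers to. The names are invented and this is not HPX's implementation; the point is the single ADL call site.]

```cpp
#include <utility>

namespace mylib {

    namespace detail {

        // A CPO is a constexpr function object: users always call
        // mylib::execute(...), and customization happens through the
        // unqualified (ADL) call below, defined next to their own type.
        struct execute_t
        {
            template <typename Executor, typename F>
            constexpr auto operator()(Executor&& exec, F&& f) const
                -> decltype(execute(
                       std::forward<Executor>(exec), std::forward<F>(f)))
            {
                return execute(
                    std::forward<Executor>(exec), std::forward<F>(f));
            }
        };
    }

    // The single, non-overloadable entry point.
    inline constexpr detail::execute_t execute{};
}
```

[An executor type opts in by providing a free function execute(my_executor, f) findable by ADL; funneling everything through one call site instead of a large overload set is where the compile-time savings would come from.]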
nikunj97 has quit [Ping timeout: 268 seconds]
hkaiser has joined #ste||ar
nikunj has joined #ste||ar
hkaiser has quit [Quit: bye]
hkaiser has joined #ste||ar
aserio has joined #ste||ar
weilewei has joined #ste||ar
aserio has quit [Ping timeout: 250 seconds]
aserio has joined #ste||ar
<hkaiser>
rori, simbergm: yt?
<simbergm>
hkaiser: here
<hkaiser>
simbergm: I see compilation problems on Windows/master now if I specify jemalloc as the allocator
<hkaiser>
apparently the allocator is not set as a dependency while compiling the modules
<simbergm>
:/
<simbergm>
what problems exactly?
<hkaiser>
if HPX_HAVE_INTERNAL_ALLOCATOR is defined, then the collectives module can't find the jemalloc includes
<rori>
on it!
<hkaiser>
other modules as well
<hkaiser>
thanks rori!
<hkaiser>
rori: the plugins as well, btw
<hkaiser>
the parcelports
<simbergm>
thanks hkaiser for testing :)
<simbergm>
I'm aiming to make an rc next week now that we have the cmake branch in (and after the jemalloc problem is fixed)
<hkaiser>
simbergm: ok, sounds good
<hkaiser>
rori: let me know if you want me to test something
<rori>
you can try adding hpx::allocator as a dependency in libs/allocator_support/CMakeLists.txt
<hkaiser>
so we have hpx_allocator and hpx::allocator as targets now?
<rori>
yep, hpx_allocator is the module and hpx::allocator is an imported target built after calling find_package (the :: is standard for imported targets in CMake)
<hkaiser>
ok
<hkaiser>
confusing *puzzled*
<rori>
yeah, we can also change the name ^^
<hkaiser>
yah, but that fixes it, thanks!
<rori>
Cooooool let me know if anything else is failing!
<weilewei>
I got many questions from the lab, one is "What reading material would you recommend if someone (like me) would like to learn about HPX?", lol, is the hpx documentation a good place to start?
<hkaiser>
weilewei: well, it's a good way to start as there is no better way ;-)
<weilewei>
I agreed
<hkaiser>
IOW, we don't have anything else ;-)
<weilewei>
I might CC that email to you; for some of them I'm not sure about the answers
<hkaiser>
sure
<hkaiser>
rori: will you create a PR or should I simply push to master (pssst don't tell anybody)?
<hkaiser>
simbergm: the allocator problem could have been caused by not deleting the cache, I'll get back to that once I have the boost library problem figured out
<hkaiser>
simbergm: here is a comment hinting at a reason: "System has been removed when passing at set_property for cmake < 3.11, instead of target_include_directories"
<hkaiser>
does that mean that set_property is better for the include directories?
<hkaiser>
heller: yes, saw that - cool
<simbergm>
hkaiser: don't know, sorry
<simbergm>
where is this from?
<simbergm>
ah, our cmake files...
<hkaiser>
yes, SetupHwloc
<simbergm>
some other places too
<simbergm>
I have to admit that sentence doesn't make much sense to me
<hkaiser>
right
<hkaiser>
same here
<hkaiser>
rori might know
<simbergm>
yep, it's her comment
<simbergm>
are you completely blocked by this btw? do things work without jemalloc?
<hkaiser>
simbergm: hpx is fine now, phylanx still needs some investigation
<hkaiser>
I'll create a PR for HPX later
<hkaiser>
grrr, not quite yet
<simbergm>
ok, thanks
<hkaiser>
all modules compile, but the main library still complains about not being able to find jemalloc
<simbergm>
and sorry again, I should've been more careful with this, at least not merged it on a Friday...
<simbergm>
hrm, in what way exactly? doesn't know about the target? or can't find jemalloc itself?
<hkaiser>
can't find the include
<hkaiser>
at compilation time
<hkaiser>
cmake goes through fine
<hkaiser>
looks like hpx does not depend on hpx::allocator
<hkaiser>
yep, doesn't link with jemalloc either
<simbergm>
mmh, might need to just explicitly do target_link_libraries(hpx hpx::allocator) for now
<simbergm>
again, with
<simbergm>
...
<hkaiser>
it has that :/ - no idea
<simbergm>
with newer cmakes we can do less set/get_target_property and things should be smoother
<simbergm>
hmm...
<simbergm>
oh, indeed
<simbergm>
can you see if you get any traces of jemalloc when compiling? does it do at least one of: link to jemalloc or add include directories?
<hkaiser>
hold on, probably my fault
<hkaiser>
but it's still weird
<hkaiser>
I use the system allocator and it still complains about not finding jemalloc
<simbergm>
very weird
<simbergm>
wipe it all? or did you already do that?
<hkaiser>
did that
<hkaiser>
simbergm: as said - don't worry too much - I need to investigate and will report what I find
<simbergm>
:/
<simbergm>
I'm a natural-born worrier
<simbergm>
thanks for looking into this
aserio has quit [Quit: aserio]
<hkaiser>
simbergm: config/defines.hpp is not generated anymore
<hkaiser>
so it picked up my old version which caused the jemalloc problems
<hkaiser>
is that intentional?
<simbergm>
uhm, probably not
<simbergm>
but wouldn't things fail to compile without that?
<hkaiser>
they do now ;-)
<hkaiser>
where was this generated? do you remember?
<simbergm>
they do fail?
<simbergm>
no, sorry :/
<simbergm>
main CMakeLists.txt
<hkaiser>
no, it compiles hmmm
<hkaiser>
probably ended up in a different place now
<simbergm>
hkaiser, I think it's in the config module now
<simbergm>
but you're right, it has moved
<simbergm>
behind the CONFIG_FILES option to hpx_add_module
<hkaiser>
ok, makes sense - I simply had a stale version sitting on my disk
<hkaiser>
sorry for this
<simbergm>
hkaiser: np, happy that it's sorted, plus you found a real problem in the process
nikunj has quit [Remote host closed the connection]
aserio has joined #ste||ar
aserio has quit [Client Quit]
rtohid has left #ste||ar ["Konversation terminated!"]
maxwellr96 has joined #ste||ar
weilewei has quit [Remote host closed the connection]
<jaafar>
hkaiser: yt?
<hkaiser>
here
<jaafar>
Any experiments etc. that would be helpful rn?
<jaafar>
I've been looking in detail at the scheduling behavior and it's fairly surprising
<jaafar>
but I don't know how to get further really
<jaafar>
each of those diagonal "lines" in the plot seems to correspond to a CPU core, and they tend to do just one kind of stage, i.e. one core is doing just stage 3 or stage 1
<hkaiser>
jaafar: are there dependencies that enforce that sequential execution?