<hkaiser>
the vtune marking the scheduler as a hot spot is a red herring
<hkaiser>
this is caused by the high idle rate, which in turn is caused by too little parallelism
<mdiers_>
yes, but I tested it with 256 data records divided into 4 tasks, and it was the same
<mdiers_>
i think i have created some behavior that makes the scheduler not work efficiently
<hkaiser>
mdiers_: ok, I will try to look at your code in more detail tonight
<mdiers_>
I see this behavior only on the EPYC nodes; on the 20- or 32-core Intel nodes I don't have it
<hkaiser>
interesting
<mdiers_>
We have enough different hardware to test... The 24-core EPYC is my workstation and the 64-core Rome is a borrowed test system for estimating later purchases.
<mdiers_>
is there any experience on whether HPX loses performance when compiled with 128-core support instead of 64?
<mdiers_>
in my example the workload can of course also be increased, and I have already given the parallel::for_loop in the workload a static_chunk_size((n + target.num_pus - 1) / target.num_pus)
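For reference, a minimal sketch of that chunking (not mdiers_'s actual code; the trivial loop body and the use of std::thread::hardware_concurrency() in place of target.num_pus are assumptions):

    // One chunk per processing unit: ceil(n / num_pus), mirroring
    // static_chunk_size((n + target.num_pus - 1) / target.num_pus) above.
    #include <hpx/hpx_main.hpp>
    #include <hpx/include/parallel_for_loop.hpp>
    #include <cstddef>
    #include <thread>
    #include <vector>

    int main()
    {
        std::size_t const n = 1000000;
        std::vector<double> data(n, 1.0);

        // stand-in for target.num_pus from the snippet quoted above
        std::size_t const num_pus = std::thread::hardware_concurrency();
        std::size_t const chunk = (n + num_pus - 1) / num_pus;

        hpx::parallel::for_loop(
            hpx::parallel::execution::par.with(
                hpx::parallel::execution::static_chunk_size(chunk)),
            std::size_t(0), n,
            [&](std::size_t i) { data[i] *= 2.0; });  // dummy workload

        return 0;
    }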
nikunj has quit [Ping timeout: 258 seconds]
nikunj has joined #ste||ar
Abhishek09 has joined #ste||ar
<mdiers_>
hkaiser: I will continue tomorrow; now I also suspect the memory accesses. thanks a lot for the time you invested. ;-)
<Abhishek09>
hkaiser: have you contacted Rod?
<hkaiser>
Abhishek09: not yet, thanks for reminding me
rtohid has joined #ste||ar
<rtohid>
@Abhishek09, I am here, how can I help you?
<Abhishek09>
rtohid: happy to see u
<rtohid>
thanks, did you get my email yesterday?
<Abhishek09>
I want to discuss your project "Providing pip package for phylanx"
<Abhishek09>
rtohid: Yes, but I also replied
<Abhishek09>
I think you have to use manylinux
<Abhishek09>
because PyPI doesn't accept it without manylinux
<rtohid>
what is it, and why do you need it? could you give a brief description, please?
<Abhishek09>
if we do it without, it will be rejected; we'd be bad PyPA citizens
<Abhishek09>
The approach we discussed will work, but it won't be accepted
<rtohid>
yes, but that's the distribution issue, isn't it?
<Abhishek09>
Therefore we have to use manylinux, or only an sdist will be possible
<Abhishek09>
but I would prefer a wheel
<rtohid>
let's start with packaging and not worry about distribution for now.
<Abhishek09>
Why? If we don't, our work will be wasted
<Abhishek09>
we should plan everything before making any decision
<rtohid>
I would start with building HPX and Phylanx first.
<Abhishek09>
so that we are always moving toward achieving our goals
<rtohid>
For us, I guess, the goal is to package Phylanx. What is your goal?
<Abhishek09>
to make Phylanx pip-installable
<rtohid>
cool, so let's start by building it first.
<Abhishek09>
in the same way that we discussed earlier
<Abhishek09>
But how will you deal with the wheel for releasing on PyPI?
<rtohid>
I would start from a docker image and try to figure out Phylanx's software dependencies.
<rtohid>
the goal for now is not distributing the package. Let's do this step by step.
<Abhishek09>
hpx, jemalloc, gcc, boost, pybind11,
<Abhishek09>
blaze
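As an aside on that list, pybind11 is the piece that wraps the C++ core into the Python extension module a wheel would ship. A toy module as illustration (assumed example only, not Phylanx's actual bindings):

    // Toy pybind11 module standing in for the real Phylanx bindings.
    #include <pybind11/pybind11.h>

    int add(int a, int b) { return a + b; }

    PYBIND11_MODULE(example, m)
    {
        m.doc() = "toy stand-in for the Phylanx Python bindings";
        m.def("add", &add, "add two integers");
    }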
<rtohid>
great! can you please create a docker image with all these installed?
<Abhishek09>
docker image or wheel file?
<rtohid>
docker image. first we'd want to manually build the library and then we can automate the process step by step.
<Abhishek09>
What is your plan? Can you explain, so that I can research whether it is efficient or not?
<rtohid>
start with manual build, automate the process a few steps at a time and, eventually, package it.
<Abhishek09>
is this a 3-month project, or shorter?
<rtohid>
I am not sure about the time.
<rtohid>
sorry, I have to go to a meeting now. Please feel free to email me if you have any questions.
rtohid has left #ste||ar ["Konversation terminated!"]
hkaiser has quit [Quit: bye]
<Abhishek09>
what is the name of the Phylanx dnf package?
Abhishek09 has quit [Remote host closed the connection]
hkaiser has joined #ste||ar
<diehlpk_work>
hkaiser, We have scaling results for hpx + apex, hpx + apex + performance counters, hpx standalone, and hpx + performance counters + apex, from one up to 64 nodes
<diehlpk_work>
Sagiv will compute the sub-grids per second and we can compare the overhead across all of them
<hkaiser>
diehlpk_work: very nice! thanks!
<diehlpk_work>
hkaiser, Did you have time to read the CIC document?
<hkaiser>
diehlpk_work: I read it over but had a hard time coming up with anything to add