hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC: https://github.com/STEllAR-GROUP/hpx/wiki/Google-Summer-of-Code-%28GSoC%29-2020
<zao> If you had an interactive shell, you could always run screen or tmux :)
<hkaiser> you still should be able to connect to the pid using gdb, even if you have separate ssh sessions
<zao> (assuming they were installed)
mdiers_1 has joined #ste||ar
mdiers_ has quit [Ping timeout: 240 seconds]
mdiers_1 is now known as mdiers_
hkaiser has quit [Quit: bye]
bita_ has joined #ste||ar
bita_ has quit [Client Quit]
Yorlik_ has joined #ste||ar
Yorlik has quit [*.net *.split]
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
weilewei has quit [*.net *.split]
weilewei has joined #ste||ar
weilewei has quit [Ping timeout: 240 seconds]
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
gonidelis has joined #ste||ar
iti has joined #ste||ar
<iti> Hello again, I was trying to build HPX with the instructions provided by simbergm and I cannot figure out how to fix this error or why it is showing up. While building, the terminal shows the error "CMake Error at cmake/HPX_Message.cmake:48 (message):
<iti> in_tree This project requires an out-of-source-tree build. See README.rst.
<iti> Clean your CMake cache and CMakeFiles if this message persists.
<iti> Call Stack (most recent call first):
<iti> cmake/HPX_ForceOutOfTreeBuild.cmake:12 (hpx_error)
<iti> CMakeLists.txt:121 (hpx_force_out_of_tree_build)
<iti> "
iti has quit [Quit: Leaving]
gonidelis has quit [Remote host closed the connection]
Kairav has joined #ste||ar
Kairav has quit [Remote host closed the connection]
nikunj97 has joined #ste||ar
<nikunj97> Can someone tell me why I'm getting this error? https://pastebin.com/v92C5r4w
<nikunj97> 1>C:\Users\Nikunj\source\repos\SIMD Programming\2d stencil\include\generic_simd.hpp(77,1): error C2440: 'return': cannot convert from 'simd::vec<simd::float4>' to 'simd::vec<simd::float4>'
<nikunj97> 1>C:\Users\Nikunj\source\repos\SIMD Programming\2d stencil\include\generic_simd.hpp(77,1): message : Constructor for class 'simd::vec<simd::float4>' is declared 'explicit'
<nikunj97> why does it not work when the constructor is explicit?
<nikunj97> I still don't see any trace of implicit conversions
<heller1> Copy ctors are almost always implicitly called
<heller1> Especially when returning from functions
<nikunj97> heller1, didn't know that
<nikunj97> so basically always keep copy ctors implicit
<heller1> Yes
<nikunj97> heller1, gotcha. Thanks!
<heller1> There's no reason to have it otherwise
nikunj has quit [Read error: Connection reset by peer]
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
hkaiser has joined #ste||ar
<zao> nikunj97: Very brave to have a source folder with a space in it :D
<nikunj97> zao, hahaha, i believe in trying new things ;)
Pranavug has joined #ste||ar
pranav_ has joined #ste||ar
Pranavug has quit [Remote host closed the connection]
pranav_ has quit [Quit: Leaving]
Pranavug has joined #ste||ar
Pranavug has quit [Client Quit]
Pranavug has joined #ste||ar
<nikunj97> hkaiser, yt?
Pranavug has left #ste||ar [#ste||ar]
Pranavug has joined #ste||ar
<nikunj97> hkaiser, how is the arm project doing?
<nikunj97> btw I'm writing the 2d stencil application from scratch coz it was getting harder for me to add the Virtual Node Grid concept to heller1's code
<nikunj97> I'll be writing the code such that we make use of most simd cores along with thread level parallelism
<nikunj97> also, SVE allows for 2048-bit SIMD registers, so I believe that code utilizing those simd units can gain a lot in scaling (provided we don't run into the I/O bottlenecks you mentioned earlier)
<hkaiser> nikunj97: well, sure
<hkaiser> SIMD is 100% orthogonal to parallelism, however
<nikunj97> hkaiser, I believe simd isn't exactly orthogonal
<nikunj97> I mean GPU's are essentially simd units, right?
<hkaiser> those are SIMT
<hkaiser> so it's not really the same
<nikunj97> the way simd extracts parallelism is different to the way hpx does
Pranavug has quit [Quit: Leaving]
<nikunj97> so in that regard, I do understand what you're saying
<hkaiser> SIMD is executed by a single thread of execution
<hkaiser> GPUs have thousands of threads that do the same
<nikunj97> yes
<nikunj97> well, hpx does schedule a task on a thread. But the execution time of that task can be further reduced by utilizing simd instruction set
<hkaiser> each HPX thread can have its own SIMD instructions, and you can do SIMD with or without parallelism
<hkaiser> sure
<nikunj97> yes, agreed
<nikunj97> so it's not exactly orthogonal then, is it? It's essentially utilizing the simd registers and simd compute units within the thread
<nikunj97> so it's actually within thread parallelism kinda
<nikunj97> I don't know what the right word would be
<nikunj97> but I learnt simd programming coz of this, so I find it cool ;)
<nikunj97> that's why I wanted to know if you guys have started benchmarking on ARM64FX
<nikunj97> because if you haven't, then I can provide the one I'm writing (it will essentially be inspired by heller1's code, with simd added)
<hkaiser> sure
<hkaiser> nikunj: we have not done benchmarking, AFAIK
<hkaiser> nikunj97: ^^
<nikunj97> hkaiser, noted
<nikunj97> hkaiser, btw it seems that you forgot to add my name to the mentor list
<hkaiser> uhh
<hkaiser> you should have received an email to self-register
<nikunj97> not about that
<nikunj97> but about adding my name in the wiki
gonidelis has joined #ste||ar
<nikunj97> I emailed you the required details; you told me in our last call that you'd do it
<hkaiser> ahh, the wiki
<hkaiser> right - I'm sorry
<nikunj97> no worries
weilewei has joined #ste||ar
<weilewei> hkaiser it turns out it might not be easy to use the pid either. Every time I get an interactive node, that node cannot run any compute tasks; it launches tasks on a specific remote node via jsrun. And every time I grab an interactive node, it is connected to different specific nodes. If the two nodes are different, then the pid approach does not work
<weilewei> That's a Summit-specific setting... I hope I'm expressing myself clearly...
nikunj97 has quit [Quit: Leaving]
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
nikunj has quit [Remote host closed the connection]
nikunj has joined #ste||ar
<hkaiser> weilewei: understood
<weilewei> hkaiser great
<hkaiser> how do you connect to a running app with totalview (or whatever debugger there is on summit)?
<hkaiser> weilewei: all I know is that you are connecting using ssh to the compute node, and ssh allows to do these things, as zao said, possibly using tmux or similar
<weilewei> arm-forge. Basically, I download a client on my local machine and remotely access Summit. On Summit, module load forge, then `ddt --connect jsrun -.. ./myApp`, then they connect
<zao> heh, I was going to ask if you had DDT
<hkaiser> so you should be able to tell ddt to what process to connect
<hkaiser> I'm not a linux guy, others might be able to help more
<weilewei> ddt cannot read debugging information
<weilewei> zao ddt can connect to an MPI application without NVLink, but for one with NVLink (GPUDirect) it cannot read the debugging info
<hkaiser> weilewei: can't you debug things without gpudirect?
<hkaiser> it's just mpi calls after all
<weilewei> hkaiser hmmm, I see what you mean. Then I need to memcpy device to host or H2D, and then use mpi send and recv
<hkaiser> could be an option if everything else fails
<weilewei> hkaiser ok, let me try it out to see how it goes
jbjnr has joined #ste||ar
<jbjnr> So we are on lockdown now. All shops closed apart from food and pharmacies until the end of the month. I hope we are not banned from leaving the house like in Italy.
<hkaiser> jbjnr: doh
<hkaiser> hope all will be well
<jbjnr> Well statistically speaking we're likely to catch it, fingers crossed we're not the unlucky ones. Olga's had a cough for a few days and a sore throat, so that's two symptoms.
<jbjnr> I finally managed to get riot to connect to irc again. I wish it would just stay connected when I shut things down and restart...
<heller1> Crossing fingers for you! We're not as locked down yet. I'm working from home and schools are closed
<heller1> So we just have everyone locking themselves down on their own...
<heller1> Here's a breakdown of the numbers
<heller1> Hope you all stay safe!
<zao> Good luck with that, when my uni still insists that working from home is only for those who are sick.
<jbjnr> We've been at home for a week and a half, but schools closed a few days back and now the shops shutting really means it's going to be quiet. Glad I bought lots of bicycle repair stuff to keep me busy over the coming days. I can at least go out and get exercise etc.
gonidelis56 has joined #ste||ar
gonidelis has quit [Ping timeout: 240 seconds]
gonidelis56 is now known as gonidelis
<gonidelis> Same here in Greece. Unis closed, shops closed. Guess I'll make a good GSoC run shut in at home.
<zao> We're still _teaching_ in meatspace...
<heller1> ugh
<heller1> This is the WHO site tracking the official counts
<weilewei> This tracking map is also good: https://coronavirus.jhu.edu/map.html
<hkaiser> gonidelis: did you look at the algorithms yet?
gonidelis has quit [Ping timeout: 240 seconds]