K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
parsa has quit [Ping timeout: 276 seconds]
parsa has joined #ste||ar
nanmiao has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
nanmiao has quit [Quit: Connection closed]
hkaiser has quit [Quit: bye]
diehlpk_work has quit [Remote host closed the connection]
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
diehlpk_work has joined #ste||ar
hkaiser has joined #ste||ar
<gnikunj[m]> hkaiser: meeting?
<pedro_barbosa[m]> diehlpk_work: Is it possible to use HPXCL to dynamically allocate shared memory?
nanmiao has joined #ste||ar
<hkaiser> pedro_barbosa[m]: not sure I understand the question, could you elaborate, please?
<pedro_barbosa[m]> I was trying to declare a variable like this on my gpu kernel
<pedro_barbosa[m]> extern __shared__ float internal[];
<pedro_barbosa[m]> But in order to do so I need to pass the shared memory size when I launch the kernel, normally it would be like my_kernel<<<grid, block, memory>>>
<hkaiser> hpxcl is an infrastructure to remotely launch kernels
<hkaiser> whatever you do in those kernels is something you have to decide
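(For reference, a minimal plain-CUDA sketch of dynamically sized shared memory, independent of any HPXCL API; the kernel name my_kernel and the sizes here are illustrative assumptions. The third <<<...>>> launch parameter is the byte count of dynamic shared memory per block.)

    #include <cstdio>

    // Dynamically sized shared memory: the array length is not known at
    // compile time, so it is declared 'extern' and the byte count is
    // supplied as the third kernel launch parameter.
    // Kernel name and sizes are illustrative, not from HPXCL.
    __global__ void my_kernel(int n)
    {
        extern __shared__ float internal[];  // sized at launch time
        int i = threadIdx.x;
        if (i < n)
            internal[i] = static_cast<float>(i);  // each thread fills one slot
        __syncthreads();
        if (i == 0)
            printf("internal[n-1] = %f\n", internal[n - 1]);
    }

    int main()
    {
        int const n = 256;
        // third <<<...>>> argument: bytes of dynamic shared memory per block
        my_kernel<<<1, n, n * sizeof(float)>>>(n);
        cudaDeviceSynchronize();
        return 0;
    }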
<srinivasyadav227> hkaiser: is there any way i can run or check tests on CI when building hpx with gcc 11.1 & c++20?
<srinivasyadav227> i am using this temporary image (https://gist.github.com/srinivasyadav18/a68cd65de0ff3bfa6453da22dc6e8b36)
<srinivasyadav227> for gcc 11.1 (built from the existing Dockerfile for stellargroup/build_env, with changes to install and use gcc 11.1).
<diehlpk_work> pedro_barbosa[m], When was this feature introduced?
<diehlpk_work> Does nvrtc support that feature?
<pedro_barbosa[m]> When was it introduced in CUDA?
<pedro_barbosa[m]> This forum thread is basically what I was trying to do
<hkaiser> srinivasyadav227: I think ms[m] just added gcc 11.1 as a tester on the cscs resources
<hkaiser> srinivasyadav227: we can also give you access to our local cluster here at LSU, where we have it installed as well
nanmiao has quit [Quit: Connection closed]
nanmiao has joined #ste||ar
hkaiser has quit [Quit: bye]
<gonidelis[m]> gnikunj[m]: teodorescu is mentioning our chat in his talk ;)
<gonidelis[m]> in response to an `HPX vs concore` question
<gnikunj[m]> ohh crap, how did I miss his talk. Let me join in xD
<gnikunj[m]> so we finally got him to talk on HPX \o/
<gonidelis[m]> i wouldn't say that it's the perfect advertisement
<gonidelis[m]> "limited" is the word he used for HPX
<gnikunj[m]> ohh lol, what did he say about HPX?
<gonidelis[m]> although he argued that he does not want to constrain himself to a single library, but rather to propose general task concurrency principles that we should all follow
<gnikunj[m]> here we go: troubles with rvalue reference :D
<gonidelis[m]> i would translate it as "if you don't want to bother following those rules of his, just use HPX; we are implementing them for you"
<gnikunj[m]> <gonidelis[m] "although he argued that he does "> we could make HPX that generalized library
mortenbirkelund has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser has quit [Quit: bye]
nanmiao has quit [Quit: Connection closed]
hkaiser has joined #ste||ar
mortenbirkelund has quit [Ping timeout: 260 seconds]