K-ballo changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
nanmiao has joined #ste||ar
Gibbs has joined #ste||ar
Gibbs has quit [Quit: Connection closed]
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
Coldblackice_ is now known as Coldblackice
nanmiao has quit [Quit: Connection closed]
shubham has joined #ste||ar
shubham has quit [Quit: Connection closed for inactivity]
gdaiss[m] has quit [Quit: Idle for 30+ days]
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
Vir has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
diehlpk_work has quit [Ping timeout: 276 seconds]
Vir has joined #ste||ar
hkaiser has quit [Quit: bye]
hkaiser_ has joined #ste||ar
hkaiser_ has quit [Remote host closed the connection]
hkaiser_ has joined #ste||ar
gdaiss[m] has joined #ste||ar
hkaiser has joined #ste||ar
hkaiser_ has quit [Ping timeout: 276 seconds]
<sestro[m]> Hi, is there an easy way to create a "task-local" variable in HPX? I'm looking for the equivalent of `thread_local`, but for HPX tasks. The use case is to have a variable shared between all iterations in an `hpx::for_loop` performed by the same task. Or would I have to manually create the chunks over the loop index and add the variable at that level?
hkaiser_ has joined #ste||ar
<hkaiser> sestro[m]: there is get/set_thread_data, which allows you to attach a size_t to an HPX thread
<hkaiser> but this is very low level
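A minimal sketch of the get/set_thread_data approach described above: it stashes a pointer to a task-local cache in the size_t slot of the current HPX thread. The headers, `hpx::threads::get_self_id()`, and the cache type are illustrative assumptions, not taken from the log:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/threads.hpp>

#include <cstddef>
#include <map>

using cache_t = std::map<int, int>;    // hypothetical lookup cache

int main()
{
    cache_t cache;    // owned by this task

    // attach a pointer to the cache to the current HPX thread as a size_t
    hpx::threads::set_thread_data(
        hpx::threads::get_self_id(), reinterpret_cast<std::size_t>(&cache));

    // ... later, from code running on the same HPX thread ...
    auto* c = reinterpret_cast<cache_t*>(
        hpx::threads::get_thread_data(hpx::threads::get_self_id()));
    (*c)[42] = 1;

    return 0;
}
```

As noted, this is very low level: nothing manages the lifetime of the attached pointer, so the cache must outlive every access made through get_thread_data.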
<hkaiser> sestro[m]: do you need a variable for some collective operation?
hkaiser_ has quit [Client Quit]
<sestro[m]> hkaiser: I need a sort of lookup cache, but only for the scope of each individual task, so not shared between tasks.
<hkaiser> would creating the cache on the stack of that task work? or do you need to access it after the task has ended as well?
<sestro[m]> No, that should work.
<hkaiser> might be the easiest solution...
<sestro[m]> true, thanks!
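A minimal sketch of the stack-based suggestion, assuming `hpx::async` and standard HPX convenience headers; the cache contents and the work done are placeholders:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>

#include <cstddef>
#include <map>

int main()
{
    auto f = hpx::async([]() -> std::size_t {
        std::map<int, int> cache;        // task-local: one instance per task
        for (int i = 0; i != 100; ++i)
            cache.emplace(i % 16, i);    // shared across this task's iterations
        return cache.size();
    });
    return f.get() == 16 ? 0 : 1;
}
```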
<ms[m]> gnikunj: just in case, kokkos meeting started now
diehlpk_work has joined #ste||ar
nanmiao has joined #ste||ar
<sestro[m]> hkaiser: sorry, had to run earlier. regarding the stack allocation: unless I'm missing something here (which most likely is the case), allocating the cache on the stack within the functor passed to `hpx::for_loop` would mean the object is created for every single iteration. this is what I'm trying to avoid.
<sestro[m]> is there a guarantee that an HPX task, once scheduled for execution, cannot jump between OS threads?
<hkaiser> sestro[m]: you said you needed it for every 'thread'
<hkaiser> so you need it for every 'chunk'?
<hkaiser> why not use thread_locals, then?
<hkaiser> and yes, safe_object does just that
<sestro[m]> hkaiser: at least for every chunk, sorry for not being clear before. but for every thread also works.
nanmiao has quit [Quit: Connection closed]
<sestro[m]> I incorrectly assumed thread_local wouldn't work, as I wasn't aware that tasks cannot jump between OS threads
<hkaiser> tasks may jump between kernel threads
<hkaiser> but if it's caching you're interested in, this shouldn't be a problem
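A minimal sketch of the thread_local variant, assuming `hpx::for_loop` with `hpx::execution::par` (exact namespaces and headers vary across HPX versions). As hkaiser says, a task that migrates between kernel threads may see a different, possibly empty, cache after a suspension point; for a pure lookup cache that only costs a recomputation:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_loop.hpp>

#include <map>

thread_local std::map<int, int> tls_cache;    // one cache per OS worker thread

int expensive(int x) { return x * x; }        // stand-in computation

int cached(int x)
{
    auto it = tls_cache.find(x);
    if (it == tls_cache.end())
        it = tls_cache.emplace(x, expensive(x)).first;
    return it->second;
}

int main()
{
    hpx::for_loop(hpx::execution::par, 0, 1000,
        [](int i) { (void) cached(i % 16); });
    return 0;
}
```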
<sestro[m]> but the cached entries may be task-specific
<hkaiser> hmmm
<sestro[m]> so I'd like to avoid switching between different caches
nanmiao has joined #ste||ar
<hkaiser> that allows doing things specific to the chunks
<sestro[m]> ah, okay. will have a look at that, thanks
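A simplified sketch along the lines of HPX's safe_object example (a pattern from the HPX examples, not a library API): keep one data partition per worker thread and hand each chunk the partition of the thread it runs on. `hpx::get_os_thread_count()` and `hpx::get_worker_thread_num()` are real HPX calls; the wrapper class itself is an assumption:

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_loop.hpp>
#include <hpx/include/runtime.hpp>

#include <map>
#include <vector>

// One data partition per OS worker thread; get() hands back the partition
// of whichever worker thread the calling chunk currently runs on.
template <typename T>
class safe_object
{
    std::vector<T> parts_;

public:
    safe_object() : parts_(hpx::get_os_thread_count()) {}

    T& get()
    {
        return parts_[hpx::get_worker_thread_num()];
    }
};

int main()
{
    safe_object<std::map<int, int>> caches;

    hpx::for_loop(hpx::execution::par, 0, 1000, [&](int i) {
        auto& cache = caches.get();    // never shared between worker threads
        cache.emplace(i % 16, i);
    });
    return 0;
}
```

Since a task may still migrate between worker threads, get() should be called afresh rather than cached across suspension points.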
Coldblackice has quit [Ping timeout: 252 seconds]
hkaiser has quit [Quit: bye]
diehlpk_work has quit [Remote host closed the connection]
nanmiao has quit [Quit: Connection closed]
nanmiao has joined #ste||ar
hkaiser has joined #ste||ar
joe84 has joined #ste||ar
joe84 has quit [Quit: Connection closed]
nanmiao has quit [Quit: Connection closed]
nanmiao has joined #ste||ar
klaus[m] has quit [Ping timeout: 245 seconds]
CynthiaPeter[m] has quit [Ping timeout: 245 seconds]
rachitt_shah[m] has quit [Ping timeout: 245 seconds]
CynthiaPeter[m] has joined #ste||ar
rachitt_shah[m] has joined #ste||ar
klaus[m] has joined #ste||ar
hkaiser has quit [Quit: bye]
nanmiao has quit [Quit: Connection closed]