hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar-group.org | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | This channel is logged: irclog.cct.lsu.edu
<hkaiser> gnikunj[m]: will do
<gonidelis[m]> gnikunj: what paper ?
<gnikunj[m]> I had to write one for SC (resiliency stuff) but didn't due to time restrictions :/
<gnikunj[m]> I'll write down that one now for Europar or IPDPS
<hkaiser> gnikunj[m]: done, might take a day, however
<gnikunj[m]> hkaiser: Thanks! I'll start reviewing the work I've done in the meantime.
nanmiao has joined #ste||ar
diehlpk_work has quit [Remote host closed the connection]
diehlpk has joined #ste||ar
hkaiser has quit [Quit: Bye!]
diehlpk has quit [Quit: Leaving.]
nanmiao has quit [Quit: Client closed]
K-ballo1 has joined #ste||ar
K-ballo has quit [Ping timeout: 256 seconds]
K-ballo1 is now known as K-ballo
jehelset has quit [Ping timeout: 240 seconds]
<zao> I've got a piece of personal software for which HPX might be the least horrible choice for task infrastructure. I'm not sure if I dare try to use it :D
<zao> Currently trying to understand and leverage TBB, heh.
<zao> Biggest concern I have is how well I can interact with HPX tasks to/from native threads, as I've got several parts that need to interact with the real world.
<zao> I'm gonna have to make costly blocking queries into sqlite3 and leveldb, do long-running downloads with curl, and honor RPC calls from clients over gRPC, Cap'n Proto or some other fabric like WebSockets, as well as run a bit of a management UI with Win32 windowing.
<zao> Not sure if all this renders HPX pretty much unusable and I'm better off trying to remember how to write Rust again.
<zao> At least it's not distributed, so HPXlocal would do, if it had been on vcpkg.
jehelset has joined #ste||ar
<ms[m]> zao: if all you're looking to do is that kind of blocking io stuff then hpx is probably not the right choice
<ms[m]> but if you're looking for both that and the rest of the lightweight tasking then it's more interesting
<ms[m]> there are the service pools (with configurable numbers of threads) that you can use for those blocking calls and keep the main thread pool for regular hpx tasks
<ms[m]> looking if we have any examples on using them...
<zao> It's kind of a weird application, as the workloads are pretty much accepting jobs, downloading and ingesting data and JSON, making queries against the data. The biggest "computation" is probably decompressing zstd and processing NDJSON.
<zao> I'm leaning toward an async-like architecture and everything in the C++ space kind of sucks there. Right now I've got a bunch of synchronous threads that do request-response via TBB concurrent_queue:s.
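A minimal sketch of the queue-based request/response pattern described above, assuming oneTBB's concurrent_bounded_queue; the job and result types are placeholders:

    #include <tbb/concurrent_queue.h>
    #include <string>
    #include <thread>

    // Placeholder job/result types for illustration.
    struct DownloadJob { std::string url; };
    struct DownloadResult { std::string url; bool ok; };

    int main()
    {
        tbb::concurrent_bounded_queue<DownloadJob> requests;
        tbb::concurrent_bounded_queue<DownloadResult> responses;

        // Worker thread: blocks on pop(), services the job, pushes a result.
        std::thread worker([&] {
            DownloadJob job;
            requests.pop(job);                       // blocking pop
            responses.push({job.url, /*ok=*/true});  // pretend we downloaded it
        });

        requests.push({"https://example.org/data.zst"});

        DownloadResult r;
        responses.pop(r);                            // wait for the response
        worker.join();
        return r.ok ? 0 : 1;
    }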
<ms[m]> this is I think the only example/test we have for the service pools: https://github.com/STEllAR-GROUP/hpx/blob/1.7.1/libs/parallelism/executors/tests/unit/service_executors.cpp
<ms[m]> tbb is probably not a bad choice either, but I don't know how much it'll limit you in the future
<ms[m]> zao: asio?
<zao> Not using asio at the moment, but usually do for thread pools. Biggest problem there is that I can't really do much in the way of dependencies between posted tasks.
<zao> Downloads are curl via the multi interface running on a single thread, taking download jobs on a queue, pumping whatever it has, and notifying the results on a queue.
<zao> RPC infra is kind of blocking too, on its own threads.
<zao> It's a bit of a learning experience but I can't tell where to go sync and where to go full pants-on-head async.
<ms[m]> zao: how serious is it if you make the wrong choice?
<zao> Just losing progress on a piece of infrastructure that I want out of the way and working for the rest of my hobby projects.
<zao> The application is a daemon that runs on an end-user computer, serving as an interface between my remote data store (over HTTP) and local client applications (over some sort of RPC), to stage and read datasets.
<zao> So a long-running process with a dynamic amount of clients that come and go and upwards of 200 GiB persistent state.
<zao> The thing that normally tends to trip me up with async tooling is how to interact with it from regular threads; as far as I remember, a lot of HPX kind of requires you to be on an HPX thread to do things.
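For that OS-thread boundary specifically, HPX ships a helper for running a callable as an HPX task from a plain OS thread while the runtime is up; a rough sketch, assuming hpx::run_as_hpx_thread (older releases spell it hpx::threads::run_as_hpx_thread, and the exact headers and start/stop incantation vary between HPX versions):

    // Sketch only: assumes the HPX runtime is started without an hpx_main and
    // that run_as_hpx_thread is available; check your HPX version's docs.
    #include <hpx/hpx.hpp>
    #include <hpx/hpx_start.hpp>
    #include <hpx/include/run_as.hpp>

    #include <cstdio>
    #include <thread>

    int hpx_side_work()
    {
        // Runs on an HPX thread, so futures/parallel algorithms are fair game.
        return hpx::async([] { return 42; }).get();
    }

    int main(int argc, char* argv[])
    {
        hpx::start(nullptr, argc, argv);    // bring the runtime up, don't block

        std::thread native([] {
            // A regular OS thread (curl, sqlite, gRPC, ...) handing work to HPX.
            int result = hpx::run_as_hpx_thread(&hpx_side_work);
            std::printf("result: %d\n", result);
        });
        native.join();

        hpx::run_as_hpx_thread([] { return hpx::finalize(); });  // initiate shutdown
        return hpx::stop();
    }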
jehelset has quit [Ping timeout: 240 seconds]
hkaiser has joined #ste||ar
<deepak[m]> Is that project already taken or in progress? If not, I would like to work on it; it would be great if anyone could give some insights about it.
<deepak[m]> And I was wondering if we could use "PyBind11" instead of "Boost.Python", if possible.
<gonidelis[m]> deepak: apologies for not getting back to you
<gonidelis[m]> hkaiser: what do you say ^^?
<deepak[m]> gonidelis[m]: No problem :)
<hkaiser> deepak[m]: yah, pybind11 is better
<gonidelis[m]> deepak: afaik this project has not been implemented yet
<deepak[m]> hkaiser: thanks, I will report any further progress within this week
<hkaiser> cool!
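For reference, a minimal pybind11 module looks roughly like this (the module and function names are made up for illustration):

    // example.cpp -- build with pybind11 as a Python extension module
    #include <pybind11/pybind11.h>

    int add(int a, int b) { return a + b; }

    PYBIND11_MODULE(example, m)
    {
        m.doc() = "minimal pybind11 example";
        m.def("add", &add, "Add two integers");
    }

From Python this is then just "import example" followed by "example.add(1, 2)".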
<hkaiser> gonidelis[m]: look at how it's used
jehelset has joined #ste||ar
<hkaiser> gonidelis[m]: f1 will be invoked for all partitions, f2 will be invoked afterwards to perform some reduction on the results of the invocation of f1
<gonidelis[m]> f1 is used for partitioning and f2 for reduction
<hkaiser> f1 is used for each partition, yes
<gonidelis[m]> ok
<gonidelis[m]> it is a generalized execution scheme that we apply in various end-user algos
<gonidelis[m]> mapreduce probably
<hkaiser> yes, some of the algorithms can be represented by such a scheme
<gonidelis[m]> nice
<hkaiser> as shown above, min/max/minmax are some of them
<gonidelis[m]> but we can also apply it when no reduction function is used
<hkaiser> looks like it
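A plain-C++ sketch of the scheme being discussed (not HPX's actual static_partitioner API): f1 runs on every partition, f2 then reduces the per-partition results, e.g. for a parallel min:

    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <iterator>
    #include <vector>

    // f1 is applied to each partition [b, e); f2 reduces the partial results.
    template <typename It, typename F1, typename F2>
    auto partitioned_apply(It first, It last, std::size_t parts, F1 f1, F2 f2)
    {
        auto const n = static_cast<std::size_t>(std::distance(first, last));
        auto const chunk = (n + parts - 1) / parts;

        using result_t = decltype(f1(first, last));
        std::vector<std::future<result_t>> partials;

        for (std::size_t i = 0; i < n; i += chunk)
        {
            It b = std::next(first, static_cast<std::ptrdiff_t>(i));
            It e = std::next(first,
                static_cast<std::ptrdiff_t>(std::min(i + chunk, n)));
            partials.push_back(std::async(std::launch::async, f1, b, e));
        }

        std::vector<result_t> results;
        for (auto& f : partials)
            results.push_back(f.get());

        return f2(results);    // the reduction step (omitted by some algorithms)
    }

    // Usage: per-partition min via f1, overall min via f2.
    // int m = partitioned_apply(v.begin(), v.end(), 4,
    //     [](auto b, auto e) { return *std::min_element(b, e); },
    //     [](auto const& xs) { return *std::min_element(xs.begin(), xs.end()); });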
<gonidelis[m]> looks like u suspected we needed to support that case ;p
<gonidelis[m]> does it relate to the taskbench forkjoin runs?
<gonidelis[m]> though i don't see static_partitioner being used
<gonidelis[m]> from fork-join at all
<hkaiser> don't remember where it's used, see for yourself
diehlpk has joined #ste||ar
jehelset has quit [*.net *.split]
K-ballo has quit [*.net *.split]
diehlpk has quit [*.net *.split]
hkaiser has quit [*.net *.split]
akheir has quit [*.net *.split]
wash_ has quit [*.net *.split]
gdaiss[m] has quit [*.net *.split]
bhumit[m] has quit [*.net *.split]
rori[m] has quit [*.net *.split]
dkaratza[m] has quit [*.net *.split]
KordeJong[m] has quit [*.net *.split]
Kalium has quit [*.net *.split]
ericniebler[m] has quit [*.net *.split]
gonidelis[m] has quit [*.net *.split]
zao has quit [*.net *.split]
pedro_barbosa[m] has quit [*.net *.split]
heller[m] has quit [*.net *.split]
srinivasyadav227 has quit [*.net *.split]
ChanServ has quit [*.net *.split]
zao has joined #ste||ar
dkaratza[m] has joined #ste||ar
diehlpk has joined #ste||ar
ericniebler[m] has joined #ste||ar
gdaiss[m] has joined #ste||ar
srinivasyadav227 has joined #ste||ar
jehelset has joined #ste||ar
KordeJong[m] has joined #ste||ar
gonidelis[m] has joined #ste||ar
pedro_barbosa[m] has joined #ste||ar
K-ballo has joined #ste||ar
wash_ has joined #ste||ar
Kalium has joined #ste||ar
heller[m] has joined #ste||ar
bhumit[m] has joined #ste||ar
hkaiser has joined #ste||ar
akheir has joined #ste||ar
ChanServ has joined #ste||ar
rori[m] has joined #ste||ar
bhumit[m] has quit [Ping timeout: 252 seconds]
gnikunj[m] has quit [Ping timeout: 260 seconds]
ms[m] has quit [Ping timeout: 260 seconds]
PatrickDiehl[m] has quit [Ping timeout: 260 seconds]
jedi18[m] has quit [Ping timeout: 260 seconds]
deepak[m] has quit [Ping timeout: 260 seconds]
rori[m] has quit [Ping timeout: 245 seconds]
dkaratza[m] has quit [Ping timeout: 250 seconds]
KordeJong[m] has quit [Ping timeout: 250 seconds]
ericniebler[m] has quit [Ping timeout: 240 seconds]
gonidelis[m] has quit [Ping timeout: 240 seconds]
pedro_barbosa[m] has quit [Ping timeout: 250 seconds]
gdaiss[m] has quit [Ping timeout: 252 seconds]
heller[m] has quit [Ping timeout: 268 seconds]
srinivasyadav227 has quit [Ping timeout: 268 seconds]
jedi18[m] has joined #ste||ar
gnikunj[m] has joined #ste||ar
deepak[m] has joined #ste||ar
mdiers[m] has joined #ste||ar
<mdiers[m]> Hello again. A long time ago I asked for an example of how to use compression. In the meantime, however, so much has changed in the structure that I cannot find it again. Is there still an example of how to use compression?
KordeJong[m] has joined #ste||ar
dkaratza[m] has joined #ste||ar
rori[m] has joined #ste||ar
gonidelis[m] has joined #ste||ar
ericniebler[m] has joined #ste||ar
pedro_barbosa[m] has joined #ste||ar
diehlpk has left #ste||ar [#ste||ar]
gdaiss[m] has joined #ste||ar
srinivasyadav227 has joined #ste||ar
heller[m] has joined #ste||ar
<zao> gnikunj[m]: Singletons are not great for several reasons. As they're globally accessible, they can be an invisible dependency in code, making both them and the calling code harder to reason about. Their lifetime is bothersome as well: you have 0-1 instances, either created at program startup (initialization-order problems) or lazily on first access (sensitive to call order). Destruction cannot happen until program end, which may be too late.
<zao> While cumbersome, it's often healthier to explicitly pass state into your objects as you construct them or have a more dynamic way of obtaining functionality you need via factory methods or other repositories.
ms[m] has joined #ste||ar
<hkaiser> the biggest issue however is that singletons are prone to initialization sequencing issues
<hkaiser> if you're not careful, that is
<zao> In summary, while they're awfully convenient, they're very prone to subtle errors and brittleness.
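A tiny, generic sketch of the alternative described above: state is passed in explicitly when the object is constructed, so the dependency is visible, testable, and has a deterministic lifetime (all names are made up for illustration):

    #include <cstdio>
    #include <memory>
    #include <string>
    #include <utility>

    // A hypothetical service the rest of the code depends on.
    struct Logger
    {
        void write(std::string const& msg) { std::puts(msg.c_str()); }
    };

    // Singleton style would hide this dependency behind a global accessor
    // (e.g. Logger& logger_instance();) with all the init/destruction-order
    // issues mentioned above. Explicit style makes it a constructor parameter:
    class Downloader
    {
    public:
        explicit Downloader(std::shared_ptr<Logger> log) : log_(std::move(log)) {}

        void fetch(std::string const& url)
        {
            log_->write("fetching " + url);
            // ... actual download work ...
        }

    private:
        std::shared_ptr<Logger> log_;
    };

    int main()
    {
        auto log = std::make_shared<Logger>();   // created exactly when we choose
        Downloader d{log};                       // dependency is explicit
        d.fetch("https://example.org/index.json");
    }                                            // destroyed deterministically here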
bhumit[m] has joined #ste||ar
PatrickDiehl[m] has joined #ste||ar
rayw has joined #ste||ar
jehelset has quit [Ping timeout: 240 seconds]
diehlpk_work has joined #ste||ar
jehelset has joined #ste||ar
rayw has quit [Ping timeout: 256 seconds]
jehelset has quit [Ping timeout: 240 seconds]
diehlpk_work has quit [Ping timeout: 240 seconds]
diehlpk_work has joined #ste||ar