hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoD: https://developers.google.com/season-of-docs/
diehlpk has joined #ste||ar
nikunj has quit [Read error: Connection reset by peer]
quaz0r has quit [Ping timeout: 245 seconds]
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
quaz0r has joined #ste||ar
<heller> hkaiser: unintentionally
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
hkaiser has quit [Ping timeout: 245 seconds]
diehlpk has quit [Quit: Leaving.]
quaz0r has quit [Ping timeout: 246 seconds]
quaz0r has joined #ste||ar
<Yorlik> What would be the best way to build a UDP server inside an HPX app which could handle many, many connections (thousands)? I am currently looking at the Boost Asio examples and wonder if I should simply give each client its own socket.start_receive() in an HPX::async. Good idea? Bad idea? What might be better?
rori has joined #ste||ar
tarzeau has joined #ste||ar
<heller> Yorlik: bad idea
<heller> Yorlik: asio is fine
<Yorlik> OK - so - just a thread for the server and done?
<heller> no, reuse what we already have ;)
<heller> and use the async stuff from asio
<Yorlik> I was thinking about creating one object per client to handle state, since I need to emulate a connection using UDP.
<Yorlik> These could maybe be HPX tasks that just receive the incoming messages after they are demultiplexed.
<Yorlik> Still trying to get my head around socket.receive and socket.receive_from
<heller> and you can get a pool with hpx::get_thread_pool()
<heller> where you give it a name
<heller> for example "io_pool"
<heller> and then call it a day and program your UDP send/receive logic
<Yorlik> Yes - that's the tutorial I'm working my way through
<Yorlik> I just wonder - the remote_endpoint parameter ... is receive_from writing the sender into that structure?
<Yorlik> At some point I need to demultiplex and send stuff to the handling client objects
<Yorlik> client objects would have a timer, send a ping if they haven't received messages for long enough, and eventually get dropped
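For reference, a minimal Boost.Asio sketch of the pattern discussed above, assuming a single UDP socket with one outstanding asynchronous receive (the udp_server class, buffer size, and port number are illustrative, not part of HPX or the tutorial). It shows that async_receive_from writes the sender's address into the remote endpoint before invoking the completion handler, which is the point where datagrams could be demultiplexed to per-client objects:

    // Minimal sketch: one UDP socket, one outstanding async receive.
    #include <boost/asio.hpp>
    #include <array>
    #include <iostream>

    using boost::asio::ip::udp;

    class udp_server
    {
    public:
        udp_server(boost::asio::io_context& io, unsigned short port)
          : socket_(io, udp::endpoint(udp::v4(), port))
        {
            do_receive();
        }

    private:
        void do_receive()
        {
            // Asio fills remote_endpoint_ with the sender's address before
            // the completion handler runs.
            socket_.async_receive_from(
                boost::asio::buffer(buffer_), remote_endpoint_,
                [this](boost::system::error_code ec, std::size_t n) {
                    if (!ec)
                    {
                        // Demultiplex here: look up (or create) the per-client
                        // object keyed by remote_endpoint_ and hand it the datagram.
                        std::cout << "received " << n << " bytes from "
                                  << remote_endpoint_ << '\n';
                    }
                    do_receive();    // re-arm the receive
                });
        }

        udp::socket socket_;
        udp::endpoint remote_endpoint_;
        std::array<char, 1500> buffer_;
    };

    int main()
    {
        boost::asio::io_context io;
        udp_server server(io, 9000);    // port is arbitrary for the sketch
        io.run();
    }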
hkaiser has joined #ste||ar
<hkaiser> heller: 'agent' is a term that is used by the standard for a different thing
<hkaiser> an execution agent is something that does execute code, std::thread, for instance
<hkaiser> or even our threads
<hkaiser> context was not a bad word, even if this term is used for yet another thing, but not in official documents, iirc
nikunj has joined #ste||ar
<jbjnr> hkaiser: do you ever use the logging stuff?
<jbjnr> (just curious, haven't got any questions about it)
<heller> hkaiser: well, P0443 talks about execution resource, execution context and execution agent
<heller> hkaiser: and yes, the execution::agent class is used as an abstraction layer to std::thread or our threads right now
<heller> hkaiser: I don't see where it is a different thing than what I used it for
<hkaiser> you partially abstracted the underlying execution resource
<hkaiser> by lifting the functionalities related to suspension into a separate API
<hkaiser> heller: ^^
<hkaiser> jbjnr: yes, we do use it - mostly for plugin loading problems and somesuch
<hkaiser> not so much for thread debugging nowadays...
<jbjnr> I'm removing all my parcelport logging code and the logging stuff I put in the scheduler etc. (all using std::cout) and turning it into a simple debug component that I can enable/disable with a template param and switch the code in/out
<hkaiser> what's the problem with using the existing logging library?
<jbjnr> too much stuff
<hkaiser> if you touch everything, why invent yet another thing?
<jbjnr> I only want messages from my small bits of code
<hkaiser> if you get this through code review - go for it
<jbjnr> just to enable/disable debugging info for one class at a time etc.
<hkaiser> sure, I understand
<jbjnr> it just replaces all the macros in parcelport_logging.hpp
<K-ballo> if only we could have the existing logging library do filtering
<heller> hkaiser: ok, correct me if I am wrong: An execution resource is, for example, a set of CPU cores. An execution context is something like a thread pool managing this resource, which spawns agents on it. The agents can then be suspended, resumed, etc.
<heller> hkaiser: at least that's how I read P0443
<jbjnr> K-ballo: and the filtering would need to be smart so that classes that are turned off are actually not generating anything at all, rather than being filtered out at runtime.
<hkaiser> heller: sure
<hkaiser> but an agent is more than just something that you can suspend
<hkaiser> K-ballo: I agree
<K-ballo> sounds doable.. we already do tagged logging via macros
<hkaiser> jbjnr: I would like to avoid adding custom point solutions again
<K-ballo> each "tag" could be toggled independently, at compile time and/or runtime
<hkaiser> you're just trying to get rid of one, why add a new one?
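As a rough illustration of the compile-time filtering idea above, a minimal sketch of a per-class debug switch (illustrative only; this is neither the component jbjnr is writing nor the existing HPX logging library): when the template parameter is false the call expands to nothing, so disabled classes generate no output and no formatting work at all.

    // Minimal sketch of a compile-time toggled debug helper (C++17).
    #include <iostream>
    #include <utility>

    template <bool Enabled>
    struct debug
    {
        template <typename... Args>
        static void msg(Args&&... args)
        {
            // With Enabled == false the body is discarded at compile time,
            // so nothing is formatted or printed.
            if constexpr (Enabled)
            {
                (std::cout << ... << std::forward<Args>(args)) << '\n';
            }
        }
    };

    // Each component picks its own switch at compile time.
    using parcelport_debug = debug<true>;
    using scheduler_debug  = debug<false>;   // filtered out entirely

    int main()
    {
        parcelport_debug::msg("parcelport: ", 42, " messages queued");
        scheduler_debug::msg("never emitted, never formatted");
    }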
<heller> hkaiser: sure, I never said I implemented everything so far, just something to get us started ;)
<hkaiser> heller: sure, np
<hkaiser> the thing you have doesn't 'feel' like being an agent ;-)
<hkaiser> anyways, gtg
hkaiser has quit [Ping timeout: 246 seconds]
aserio has joined #ste||ar
<jbjnr> K-ballo: where do we do tagged logging?
daissgr_ has joined #ste||ar
<jbjnr> ok. I thought you meant something different
daissgr has quit [Ping timeout: 264 seconds]
jaafar has quit [Ping timeout: 264 seconds]
diehlpk has joined #ste||ar
diehlpk has quit [Quit: Leaving.]
daissgr has joined #ste||ar
daissgr has quit [Remote host closed the connection]
hkaiser has joined #ste||ar
<heller> hkaiser: so what's missing for you to have the right 'feeling'?
<hkaiser> heller: I think an execution agent object owns the underlying OS/RTS resource
<hkaiser> your agent does not own it
<heller> ugh
<hkaiser> I'd be fine with calling the new thing agent_proxy or agent_ref
<heller> I disagree here
<hkaiser> std::thread owns the pthread
<hkaiser> hpx::thread owns our thread
<heller> the pthread is an implementation detail, and the pthread is *not* the resource
<hkaiser> I agree, I meant 'resource' not in the sense of p0443
<heller> in the sense of p0443, the context owns the resource
<hkaiser> hmmm
<heller> and even there I am not entirely sure
<hkaiser> std::thread is not a context, is it?
<jbjnr> context_view
<heller> since there's no reason why a context couldn't share a resource (aka CPU core) with some other context
<heller> I don't think it is a context
<hkaiser> right, its an agent
<hkaiser> at least how I understand it
<heller> yes
<heller> however, it doesn't have a context ;)
<heller> and no resource
<hkaiser> it has an implicit system wide context
<heller> yes, sure
<hkaiser> the thing you created refers to an agent in a type-erased way
<heller> yes
<hkaiser> it does not own the agent
<hkaiser> it is not the agent
<hkaiser> as I understand it anyways
<hkaiser> gtg again, sorry
<rori> I just landed here, but what's p0443? :D
<rori> and where can I find it ?
<heller> rori: http://wg21.link/p0443
<rori> thanks !
<heller> you probably have to go back a few revisions for execution agent, execution context and execution resource to be explained more prominently
<heller> hkaiser: true, it doesn't own it. Is ownership of an agent an important property though?
aserio has quit [Ping timeout: 245 seconds]
<heller> I am fine with calling it agent_view
<heller> or agent_ref
aserio has joined #ste||ar
<heller> or whatever, that probably makes the semantics a bit clearer then
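To illustrate the ownership distinction being debated, a minimal sketch (an interpretation only, not the actual execution_branch code): std::thread owns its underlying execution agent and must join or detach it, whereas something like an agent_ref merely refers to the agent of the thread it was created on; destroying the agent_ref has no effect on the agent itself.

    #include <thread>
    #include <iostream>

    // agent_ref is a non-owning handle to an execution agent; here it just
    // captures the id of the current thread's agent. Destroying it does not
    // end or detach the agent.
    class agent_ref
    {
    public:
        agent_ref() noexcept : id_(std::this_thread::get_id()) {}

        std::thread::id id() const noexcept { return id_; }

        // Suspension-style operation expressed against the current agent only.
        static void yield() { std::this_thread::yield(); }

    private:
        std::thread::id id_;    // refers to the agent, does not own it
    };

    int main()
    {
        std::thread t([] {
            agent_ref self;          // a view of the agent running this lambda
            agent_ref::yield();      // cooperative suspension point
            std::cout << "agent " << self.id() << " resumed\n";
        });                          // std::thread owns the agent ...
        t.join();                    // ... and is responsible for joining it
    }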
<rori> thanks!
aserio has quit [Ping timeout: 245 seconds]
aserio has joined #ste||ar
rori has quit [Quit: bye]
aserio has quit [Ping timeout: 276 seconds]
<heller> hkaiser: simbergm: about #4059 and #4090. I am not particularly happy with having everything that has "executor" in its name moved into the execution module
<heller> As I explained in one of the comments on #4090, the HPX thread-pool-specific executors should really be moved into a different module that builds upon the execution module
<heller> same as any other concrete executor implementation, IMHO
<heller> somehow github is down for me, so I can't comment directly :/
<hkaiser> heller: the current parallel_executor module has only the non-thread-executors
<heller> hkaiser: except for some forwarding headers
<heller> for example
<hkaiser> heller: ok, good point - even if those are just imported...
<heller> sure
<heller> creates a hard dependency nevertheless ;)
<hkaiser> sure, currently it will depend on the 'big-blob' anyways
<hkaiser> nobody knows what the threading subsystem will look like after being refactored
<heller> sure
<heller> I was hoping I could set one possible foundation to be built upon...
<heller> in my head, with the work done in the execution_branch, we could easily move all our synchronization objects into that module
<heller> at least the ones containing non-distributed stuff ;)
<heller> maybe even distributed things ;)
<heller> anyways
aserio has joined #ste||ar
jaafar has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 245 seconds]
aserio1 is now known as aserio
jaafar has quit [Ping timeout: 265 seconds]
aserio has quit [Ping timeout: 246 seconds]
hkaiser has quit [Ping timeout: 245 seconds]
hkaiser has joined #ste||ar
jbjnr has quit [Read error: Connection reset by peer]
jaafar has joined #ste||ar
nikunj has quit [Read error: Connection reset by peer]
USTOBRE has joined #ste||ar
<USTOBRE> Question about the documentation: In the following explanation for what the io counter component is, is "plugin plugin" a typo? "io_counter_component: A dynamically loaded plugin plugin that exposes I/O performance counters (only available on Linux)."