nikunj has quit [Read error: Connection reset by peer]
quaz0r has quit [Ping timeout: 245 seconds]
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
quaz0r has joined #ste||ar
<heller>
hkaiser: unintentionally
diehlpk has quit [Quit: Leaving.]
diehlpk has joined #ste||ar
hkaiser has quit [Ping timeout: 245 seconds]
diehlpk has quit [Quit: Leaving.]
quaz0r has quit [Ping timeout: 246 seconds]
quaz0r has joined #ste||ar
<Yorlik>
What would be the best way to build a UDP server inside an HPX app which could handle many, many connections (thousands)? I am currently looking at the Boost Asio examples and wonder if I should simply give each client its own socket.start_receive() in an HPX::async. Good idea? Bad idea? What might be better?
rori has joined #ste||ar
tarzeau has joined #ste||ar
<heller>
Yorlik: bad idea
<heller>
Yorlik: asio is fine
<Yorlik>
OK - so - just a thread for the server and done?
<heller>
no, reuse what we already have ;)
<heller>
and use the async stuff from asio
<Yorlik>
I was thinking about creating one object per client to handle state, since I need to emulate a connection using UDP.
<Yorlik>
These could maybe be HPX tasks that just receive incoming messages after demultiplexing.
<heller>
and then call it a day and program your UDP send/receive logic
<Yorlik>
Yes - that's the tutorial I'm working my way through
<Yorlik>
I just wonder - the remote_endpoint parameter ... is receive_from writing the sender into that structure?
<Yorlik>
At some point I need to demultiplex and send stuff to the handling client objects
<Yorlik>
client objects would have a timer and send a ping if not receiving messages for long enough and eventually get dropped
hkaiser has joined #ste||ar
<hkaiser>
heller: 'agent' is a term that is used by the standard for a different thing
<hkaiser>
an execution agent is something that executes code; std::thread, for instance
<hkaiser>
or even our threads
<hkaiser>
context was not a bad word, even if this term is used for yet another thing, but not in official documents, iirc
nikunj has joined #ste||ar
<jbjnr>
hkaiser: do you ever use the logging stuff?
<jbjnr>
(just curious, I haven't got any questions about it)
<heller>
hkaiser: well, P0443 talks about execution resource, execution context and execution agent
<heller>
hkaiser: and yes, the execution::agent class is used as an abstraction layer to std::thread or our threads right now
<heller>
hkaiser: I don't see where it is a different thing than what I used it for
<hkaiser>
you partially abstracted the underlying execution resource
<hkaiser>
by lifting the functionalities related to suspension into a separate API
<hkaiser>
heller: ^^
<hkaiser>
jbjnr: yes, we do use it - mostly for plugin loading problems and somesuch
<hkaiser>
not so much for thread debugging nowadays...
<jbjnr>
I'm removing all my parcelport logging code and the logging stuff I put in the scheduler etc. (all using std::cout) and turning it into a simple debug component that I can enable/disable with a template param to switch the code in/out
<hkaiser>
what's the problem with using the existing logging library?
<jbjnr>
too much stuff
<hkaiser>
if you touch everything, why invent yet another thing?
<jbjnr>
I only want messages from my small bits of code
<hkaiser>
if you get this through code review - go for it
<jbjnr>
just to enable/disable debugging info for one class at a time, etc.
<hkaiser>
sure, I understand
<jbjnr>
it just replaces all the macros in parcelport_logging.hpp
<K-ballo>
if only we could have the existing logging library do filtering
<heller>
hkaiser: ok, correct me if I am wrong: an execution resource is, for example, a set of CPU cores. An execution context is something like a thread pool managing this resource, which spawns agents on that resource. The agents are then able to be suspended, resumed, etc.
<heller>
hkaiser: at least that's how I read P0443
<jbjnr>
K-ballo: and the filtering would need to be smart so that classes that are turned off are actually not generating anything at all, rather than being filtered out at runtime.
<hkaiser>
heller: sure
<hkaiser>
but an agent is more than just something that you can suspend
<hkaiser>
K-ballo: I agree
<K-ballo>
sounds doable.. we already do tagged logging via macros
<hkaiser>
jbjnr: I would like to avoid adding custom point solutions again
<K-ballo>
each "tag" could be toggled independently, at compile time and/or runtime
<hkaiser>
you're just trying to get rid of one, why add a new one?
<heller>
hkaiser: sure, I never said I implemented everything so far, just something to get us started ;)
<hkaiser>
heller: sure, np
<hkaiser>
the thing you have doesn't 'feel' like being an agent ;-)
<heller>
you probably have to go back a few revisions for execution agent, execution context and execution resource to be explained more prominently
<heller>
hkaiser: true, it doesn't own it. Is ownership of an agent an important property though?
aserio has quit [Ping timeout: 245 seconds]
<heller>
I am fine with calling it agent_view
<heller>
or agent_ref
aserio has joined #ste||ar
<heller>
or whatever, that probably makes the semantics a bit clearer then
<rori>
thanks!
aserio has quit [Ping timeout: 245 seconds]
aserio has joined #ste||ar
rori has quit [Quit: bye]
aserio has quit [Ping timeout: 276 seconds]
<heller>
hkaiser: simbergm: about #4059 and #4090. I am not particularly happy with having everything that has the name "executor" in it moved into the execution module
<heller>
As I explained in one of the comments in #4090, the HPX thread-pool-specific executors should really be moved into a different module that builds upon the execution module
<heller>
same as any other concrete executor implementation, IMHO
<heller>
somehow github is down for me, so I can't comment directly :/
<hkaiser>
heller: the current parallel_executor module has only the non-thread-executors
<heller>
hkaiser: except for some forwarding headers
<hkaiser>
heller: ok, good point - even if those are just imported...
<heller>
sure
<heller>
creates a hard dependency nevertheless ;)
<hkaiser>
sure, currently it will depend on the 'big-blob' anyways
<hkaiser>
nobody knows what the threading subsystem will look like after being refactored
<heller>
sure
<heller>
I was hoping I could set one possible foundation to be built upon...
<heller>
in my head, with the work done in the execution_branch, we could easily move all our synchronization objects in that module
<heller>
at least the ones containing non-distributed stuff ;)
<heller>
maybe even distributed things ;)
<heller>
anyways
aserio has joined #ste||ar
jaafar has joined #ste||ar
aserio1 has joined #ste||ar
aserio has quit [Ping timeout: 245 seconds]
aserio1 is now known as aserio
jaafar has quit [Ping timeout: 265 seconds]
aserio has quit [Ping timeout: 246 seconds]
hkaiser has quit [Ping timeout: 245 seconds]
hkaiser has joined #ste||ar
jbjnr has quit [Read error: Connection reset by peer]
jaafar has joined #ste||ar
nikunj has quit [Read error: Connection reset by peer]
USTOBRE has joined #ste||ar
<USTOBRE>
Question about the documentation: In the following explanation for what the io counter component is, is "plugin plugin" a typo? "io_counter_component: A dynamically loaded plugin plugin that exposes I/O performance counters (only available on Linux)."