hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/ | GSoC2018: https://wp.me/p4pxJf-k1
parsa has joined #ste||ar
parsa has quit [Client Quit]
anushi has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Client Quit]
anushi has quit [Ping timeout: 260 seconds]
eschnett has joined #ste||ar
anushi has joined #ste||ar
anushi has quit [Ping timeout: 260 seconds]
anushi has joined #ste||ar
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
anushi has quit [Ping timeout: 260 seconds]
parsa has joined #ste||ar
parsa has quit [Client Quit]
galabc has joined #ste||ar
<galabc> Hi, I am having trouble building a simple program with CMake
<galabc> I want to use program_options from the Boost library
parsa has joined #ste||ar
<galabc> I'm simply trying to compile the first.cpp file located at BOOST_ROOT/libs/program_options/example on the rostam cluster
<galabc> This is my cmake file
<galabc> But I get the error
<galabc> I don't understand this error, since the executable example is created earlier in the file
parsa has quit [Quit: Zzzzzzzzzzzz]
<zao> galabc: add_executable is the built-in CMake function, HPX's is spelled hpx_add_executable or something.
<zao> Regular CMake targets don't get any _exe suffix, they're named exactly the thing you put in.
<zao> For "add_executable(example"; you'd "target_link_libraries(example"
<galabc> ah ok
<galabc> but I remember that for some hpx_add_executable targets I had to put name_exe
<zao> That's what I'm trying to say. hpx_add_executable is a complicated function that, among other things, adds _exe to the end of the target name.
<zao> Regular add_executable does none of that.
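
A minimal CMakeLists.txt along the lines zao describes might look like the sketch below. The target name "example" and source "first.cpp" come from the conversation above; the rest, including the Boost component lookup, is an assumption:

    cmake_minimum_required(VERSION 3.5)
    project(program_options_example CXX)

    # Boost.ProgramOptions is a compiled Boost library, so request it explicitly
    find_package(Boost REQUIRED COMPONENTS program_options)

    # Plain add_executable: the target is named exactly "example", no _exe suffix
    add_executable(example first.cpp)

    # Link using the same target name that add_executable created
    target_link_libraries(example Boost::program_options)

With hpx_add_executable, by contrast, the generated target would be named example_exe, which is why the two styles are easy to mix up.
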
<zao> And then, sleep! :D
<galabc> ah I see
<galabc> Yes, you are right, I must go sleep haha
galabc has quit [Quit: Leaving]
parsa has joined #ste||ar
nanashi55 has quit [Ping timeout: 256 seconds]
nanashi55 has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
parsa has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
nikunj97 has joined #ste||ar
nikunj97 has quit [Client Quit]
jaafar has quit [Ping timeout: 240 seconds]
anushi has joined #ste||ar
david_pfander1 has joined #ste||ar
nikunj has joined #ste||ar
david_pfander1 has quit [Ping timeout: 245 seconds]
anushi has quit [Remote host closed the connection]
jakub_golinowski has joined #ste||ar
jakub_golinowski has quit [Ping timeout: 256 seconds]
jakub_golinowski has joined #ste||ar
hkaiser has joined #ste||ar
eschnett has quit [Quit: eschnett]
K-ballo has joined #ste||ar
mcopik has joined #ste||ar
nikunj has quit [Quit: Leaving]
eschnett has joined #ste||ar
hkaiser has quit [Quit: bye]
<ms[m]1> jakub_golinowski: are you free for a call at our usual time?
<jakub_golinowski> ms[m]1, yes I am
diehlpk_work has joined #ste||ar
<diehlpk_work> ms[m]1, Have you seen my e-mail with respect to the mentor summit?
<ms[m]1> diehlpk_work: ah, saw your most recent one only now
<ms[m]1> obviously I'd be very happy to go
<diehlpk_work> Sure, please let me know once you have registered
<diehlpk_work> Please contact Adrian for reimbursement of your travel expenses and other questions related to this
<ms[m]1> sure
galabc has joined #ste||ar
galabc has quit [Client Quit]
galabc has joined #ste||ar
<ms[m]1> diehlpk_work: ok, thanks!
hkaiser has joined #ste||ar
galabc has quit [Ping timeout: 240 seconds]
jaafar has joined #ste||ar
galabc has joined #ste||ar
galabc has quit [Quit: Leaving]
parsa has joined #ste||ar
parsa has quit [Client Quit]
david_pfander has quit [Quit: david_pfander]
<ms[m]1> jakub_golinowski: I have a feeling you can't get the main thread pool via the normal means since it's not a thread_pool_base
<ms[m]1> and HPX_TLL_PUBLIC is in fact just "PUBLIC"
<jakub_golinowski> ms[m]1, you mean the thread_pool object (as opposed to executor object)?
<ms[m]1> well, executor as well it seems like
<ms[m]1> both of them rely on get_self_id
<github> [hpx] hkaiser force-pushed thread_start_stop_events from 8992db5 to d8d2a55: https://git.io/fNal8
<github> hpx/thread_start_stop_events d8d2a55 Hartmut Kaiser: Allowing to register thread event functions (start/stop/error)
<jakub_golinowski> ms[m]1, but I mean the executor you can get like this:
<jakub_golinowski> hpx::threads::executors::main_pool_executor scheduler;
<ms[m]1> jakub_golinowski: yeah, that works, but I meant getting the current executor/pool (`hpx::this_thread::get_executor` or something like that)
<jakub_golinowski> ms[m]1, so I will for now just refrain from doing that
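
For reference, a minimal sketch of the workaround that does work, built around the main_pool_executor line quoted above: construct the main-pool executor by name instead of querying the current thread for its pool. The spawned task is a made-up placeholder, and passing this executor to hpx::async is assumed to match jakub_golinowski's usage:

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>

    int main()
    {
        // Construct the executor for the main pool explicitly; asking the
        // current thread for its pool does not work here, since the main
        // pool is not a thread_pool_base.
        hpx::threads::executors::main_pool_executor scheduler;

        // Run a (placeholder) task on the main pool and wait for it.
        hpx::future<void> f = hpx::async(scheduler, [] { /* work */ });
        f.get();

        return 0;
    }
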
<jakub_golinowski> also I have just committed a working version to github
<jakub_golinowski> The windows open and the processing happens as expected. Nevertheless, some options from the Settings menu do not work as expected -> probably due to the deleted signals; I will look into that now
<jakub_golinowski> ms[m]1, How should I interpret such an error? {what}: data has already been set for this future: HPX(promise_already_satisfied)
<ms[m]1> jakub_golinowski: hmm, a future has a counterpart called a promise: the promise is where you set a value and the future is where you get a value
<jakub_golinowski> ms[m]1, but I did not use any promise
<ms[m]1> in what context does this happen? are you using something other than just plain async and futures?
<jakub_golinowski> ms[m]1, to give more context, this error occurs when I press the X on the GUI. So I guess the exit sequence is not correct and nasty stuff is happening
<ms[m]1> mmh, hard to say anything without code
<ms[m]1> do you get a line number/stack trace/something?
<jakub_golinowski> ms[m]1, so the error is not deterministic, I get different things -> I am now tracking what exactly happens after the X button is pressed
<jakub_golinowski> ms[m]1, I remember that a future is usable only once, yes?
<ms[m]1> jakub_golinowski: it's move only and I think you can only call get once
<jakub_golinowski> ms[m]1, because my thoughts went in the direction that having a future as a member field may not be a good idea
<jakub_golinowski> won't a pointer be better -> such that I can delete the future and create the next one?
<ms[m]1> it could be, depends maybe on where you create a new one, you can always just assign a new one
<ms[m]1> but I'll have a look at the code a bit closer now
<jakub_golinowski> ms[m]1, or when I use the "=" operator on the class member variable of type future, assigning a newly created future, is the old one destroyed automatically so that I have a brand new future?
<ms[m]1> yes
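
A small sketch of the pattern just confirmed: a future held by value can simply be assigned a fresh future, which releases the old shared state, so no pointer indirection is needed. The Worker class and the task body are made up for illustration:

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>

    struct Worker
    {
        // Holding the future by value is fine; futures are move-only.
        hpx::future<void> task;

        void restart()
        {
            // Move-assignment releases the old shared state (if any) and
            // installs the new one; no manual delete/new is required.
            task = hpx::async([] { /* capture loop placeholder */ });
        }
    };

    int main()
    {
        Worker w;
        w.restart();     // first run
        w.task.get();    // get() may be called only once per future
        w.restart();     // brand new future, old one already released
        w.task.get();
        return 0;
    }
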
nikunj has joined #ste||ar
<ms[m]1> jakub_golinowski, is this about for example hpx::future<void> captureThreadFinished?
mcopik has quit [Ping timeout: 260 seconds]
<jakub_golinowski> ms[m]1, yes - this is it
<ms[m]1> ok
<nikunj> hkaiser, did you get time to look into my pr?
_bibek_ has joined #ste||ar
bibek has quit [Ping timeout: 265 seconds]
_bibek_ is now known as bibek
hkaiser has quit [Quit: bye]
parsa[w] has quit [Read error: Connection reset by peer]
parsa[w] has joined #ste||ar
<jakub_golinowski> ms[m]1, I am working on changing the resolution. I took the approach of restarting the capture task each time the resolution is changed, because otherwise I was getting resource-busy errors
<jakub_golinowski> However, when I set the flag, wait for the future to end, and then reschedule the future, I get the following error
<jakub_golinowski> Interestingly, I do not get this error when I do a step-by-step debug
<jakub_golinowski> it seems like some kind of race condition
nikunj has quit [Quit: Leaving]
<jakub_golinowski> ms[m]1, the solution seems to be using hpx::apply -> which further proves that the main thread pool requires special treatment
<ms[m]1> jakub_golinowski: hmm, do you call wait on the main thread?
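
For illustration, a sketch of the hpx::apply variant mentioned above: apply is fire-and-forget and returns no future, so the caller never waits on (or re-satisfies) a shared state. The stop flag and capture loop are made-up placeholders, not jakub_golinowski's actual code, and whether this avoids the race he observed is an assumption:

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>

    #include <atomic>

    std::atomic<bool> stop_requested{false};

    // Placeholder for the capture task; loops until asked to stop.
    void capture_loop()
    {
        while (!stop_requested.load())
        {
            /* grab and process a frame */
        }
    }

    int main()
    {
        // Fire-and-forget on the main pool: hpx::apply returns no future,
        // so there is nothing for the caller to wait on and nothing to
        // trip over with promise_already_satisfied.
        hpx::threads::executors::main_pool_executor scheduler;
        hpx::apply(scheduler, &capture_loop);

        stop_requested.store(true);   // request shutdown before exiting
        return 0;
    }
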
<diehlpk_work> Your current usage represents 683% of your STEllAR-GROUP Linux plan's limit. Please upgrade in order to ensure no disruption in building.
<diehlpk_work> Has someone seen this warning too?
<zao> Circle?
<diehlpk_work> Yes
<K-ballo> 683% wow
<zao> Imagine if we had a test suite that didn't take ages to run. And a codebase that didn't take ages to build :D
<K-ballo> you may say I'm a dreamer...
<K-ballo> I failed horribly at getting build times under control
<diehlpk_work> At least I updated HPXCL to CircleCI 2.0
<diehlpk_work> heller_, https://github.com/STEllAR-GROUP/docker_build_env/blob/master/circle.yml Will you take care to update this to circle-ci 2.0 or should I have a look?
<ms[m]1> diehlpk_work: yeah, saw it today, I'm a bit worried
<ms[m]1> heller: didn't you get some special deal from circleci? hope that won't change with the move to 2.0
<diehlpk_work> Projects currently running on CircleCI 1.0 will no longer be supported after August 31, 2018. Please migrate to CircleCI 2.0.
<ms[m]1> diehlpk_work: 2.0 should already be in use, but maybe everything is messed up now...
<diehlpk_work> ms[m]1, Not for the docker repo, right?
<diehlpk_work> At least I see the warning when I click on this repo
<ms[m]1> Ah, just talking about hpx
<diehlpk_work> yes, HPX is updated
<diehlpk_work> HPXCL will be soon, once I merge to master
galabc has joined #ste||ar
galabc has quit [Quit: Leaving]
eschnett has quit [Quit: eschnett]
jakub_golinowski has quit [Ping timeout: 276 seconds]
hkaiser has joined #ste||ar
mcopik has joined #ste||ar
mcopik has quit [Ping timeout: 248 seconds]
parsa has joined #ste||ar