<ms[m]>
I'll send a similar email with a link to the issue once you're happy with it
<heller1>
ms[m]: hmm, it could be that some institutions might not want to share the application and what the application does?
<ms[m]>
heller: but then they don't comment...?
<ms[m]>
or do you mean we might want to know that some institution is using hpx but doesn't want to say for what yet?
<ms[m]>
that's a good point... do we care more about applications or users, or do we just ask for either
<heller1>
right
<heller1>
I want a list of institutions and/or applications
<ms[m]>
heller: yep, gotcha
hkaiser has joined #ste||ar
<ms[m]>
heller: rearranged it a bit
<gonidelis[m]>
hkaiser: I reckon that's an automatic message there on my commit :q
<hkaiser>
gonidelis[m]: no, I put it there just now
<hkaiser>
;-)
<gonidelis[m]>
ahh... thanks a lot. I think we have the whole summer ahead of us, so let's set that T-shirt goal for the end of the period? (just to keep the motivation :p)
<gonidelis[m]>
Nevertheless, maybe the feeling that part of my code is up there (on master) is honestly the biggest satisfaction I could get!!!
<ms[m]>
hkaiser always sounds like a robot...
<ms[m]>
gonidelis: congrats on getting that pr in! you're making really nice progress :D
<hkaiser>
gonidelis[m]: I need to be able to get back to my office first anyways, might take a while...
<gonidelis[m]>
ms[m]: thanks a lot!!!!
<gonidelis[m]>
hkaiser: np... back to work! ;)
<Yorlik>
Are task IDs guaranteed to be unique over the runtime of a program?
<ms[m]>
Yorlik: no
<Yorlik>
Just at a given time, I assume then?
<ms[m]>
they get reused
<ms[m]>
exactly
<Yorlik>
OK. Makes sense.
<ms[m]>
having two different tasks exist at the same time with the same id means they're not different tasks ;)
<Yorlik>
Could it be that such a reuse happens really quickly?
<Yorlik>
Especially if there are many tasks.
<ms[m]>
yes
<ms[m]>
it depends mostly on how often tasks yield
<Yorlik>
How likely is that?
<ms[m]>
if most tasks don't yield and just run to completion, the next task can reuse the same id immediately
<Yorlik>
because I am running two lambdas at start and end of a task
<Yorlik>
And they store certain data keyed with the id
<Yorlik>
But I guess the lambdas are outside the task.
<ms[m]>
how likely is yielding? depends on what your tasks do
<Yorlik>
I yield a lot
<ms[m]>
from not following your discussions here earlier I get the impression that you keep your tasks around for quite a long time
<ms[m]>
right
<Yorlik>
Not really
<Yorlik>
A task is just a chunk in a parallel loop
<Yorlik>
But I am using a single Lua state over the entire task
<ms[m]>
mmh, then tasks can be long lived...
<ms[m]>
but in any case, not sure what the question is anymore
<Yorlik>
Probably it's not related to my current issue, but I have a weird crash which might be a race
<Yorlik>
It seems not to happen if I have a really long break between my frame updates
<Yorlik>
But if I run my frames immediately after the previous frame has stopped, it crashes.
<Yorlik>
And the errors seem to come deeply out of the Lua call stack.
<Yorlik>
So I guess there might be something that causes a state to be reused in another thread
<Yorlik>
I have to dig deeper - probably I just stupidly shot myself in the foot somewhere by overlooking something.
diehlpk_work has joined #ste||ar
<hkaiser>
Yorlik: the start and end lambdas are run by the same task, definitely
<hkaiser>
that's the whole point of having them
<hkaiser>
a task id may be reused only after the task ran to completion (was terminated); it will never be reused while a task is just suspended
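A minimal sketch of the pattern Yorlik describes (data keyed by task id in start/end lambdas) and the cleanup it requires, given the reuse rule hkaiser states above. The map-based store is hypothetical, not an HPX facility; only hpx::threads::get_self_id() is an actual HPX call.

```cpp
// Hypothetical per-task store keyed by the current task's id. Because an
// id may be recycled as soon as a task terminates, the end-lambda must
// erase its entry before the task completes; otherwise a later task that
// reuses the id would observe stale data.
#include <hpx/include/threads.hpp>

#include <map>
#include <mutex>
#include <string>

std::mutex store_mutex;
std::map<hpx::threads::thread_id_type, std::string> task_data;

void on_task_start(std::string data)
{
    std::lock_guard<std::mutex> lock(store_mutex);
    // get_self_id() identifies the HPX task currently running this code
    task_data[hpx::threads::get_self_id()] = std::move(data);
}

void on_task_end()
{
    std::lock_guard<std::mutex> lock(store_mutex);
    // erase while the task is still alive: once it terminates, the id
    // may be handed to an unrelated task immediately
    task_data.erase(hpx::threads::get_self_id());
}
```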
<Yorlik>
That's good to know.
<Yorlik>
I believe the Lua state crashes are just the symptom - I'm making an error somewhere. But I don't know where yet.
<Yorlik>
What's strange is that a really long break between the frames (100 ms) makes it effectively go away
<Yorlik>
with 25 ms or 50 ms breaks it just happens more rarely
<Yorlik>
Might be related to my timed messages
<hkaiser>
gonidelis[m], heller1, rori: do we have the GSoC meeting now (in 10 mins)?
<jbjnr>
what's odd is that hpx::async(...) works, but hpx::apply(...) doesn't
<hkaiser>
sounds weird
<jbjnr>
I'm converting the cuda helper code to an executor and adding a new cuda event polling function like the mpi one
<Yorlik>
hkaiser: We solved the race - I did a very bad thing here, but not as bad as I thought..
<jbjnr>
hkaiser:
<jbjnr>
/home/biddisco/src/hpx-branches/cuda-futures/libs/executors/include/hpx/executors/apply.hpp:54: error: no type named ‘type’ in ‘struct std::enable_if<false, bool>’
<hkaiser>
Yorlik: good
<jbjnr>
deferred_invokable seems to think "no"
<hkaiser>
jbjnr: could be
<hkaiser>
async uses decltype(auto) nowadays, that could be the difference
<Yorlik>
I had to introduce a lock at two places where I thought I'd not need one. It's not performance critical, but it adds a lock at every task creation and release
<Yorlik>
I might find a better solution for it later. It was related to the way I allocate and release Lua states
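A rough sketch of the kind of locking Yorlik describes: a pool of Lua states guarded by a single mutex, taken once at task creation (acquire) and once at release. The pool layout is an assumption, not his actual code; only luaL_newstate/lua_close are real Lua API calls.

```cpp
// Mutex-guarded pool of Lua states: each task checks one out for its
// whole lifetime, so a state is never shared between concurrent tasks.
#include <lua.hpp>

#include <mutex>
#include <vector>

class lua_state_pool
{
    std::mutex mtx_;
    std::vector<lua_State*> free_states_;

public:
    lua_State* acquire()
    {
        std::lock_guard<std::mutex> lock(mtx_);
        if (free_states_.empty())
            return luaL_newstate();    // grow the pool on demand
        lua_State* s = free_states_.back();
        free_states_.pop_back();
        return s;
    }

    void release(lua_State* s)
    {
        std::lock_guard<std::mutex> lock(mtx_);
        free_states_.push_back(s);    // now reusable by the next task
    }

    ~lua_state_pool()
    {
        for (lua_State* s : free_states_)
            lua_close(s);
    }
};
```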
<jbjnr>
I tried using decltype(auto) for the post, but since it returns void anyway ....
<hkaiser>
yah, it's the facilities used by post
<jbjnr>
aha. you mean inside the apply code
<hkaiser>
right
<hkaiser>
comment out the enable_if and see where it breaks, that will give you a better understanding
<jbjnr>
but other executors use the same signature without problem. it's not right
<jbjnr>
k
<hkaiser>
clang is usually good at telling you why things have been SFINAE'd out
bita_ has joined #ste||ar
<jbjnr>
hmm. execution.hpp has a non-void function not returning anything; changing it to return the post.... seems to fix it. I will experiment some more
<K-ballo>
that shouldn't have fixed it
<jbjnr>
correct. the right fix is to remove the return type deduction completely; it should just be a void function
<jbjnr>
getting rid of the auto now and retesting
<jbjnr>
grrrr...
<jbjnr>
aha. I see the problem again - it's because it wants to call the cublas function, but it is missing a parameter, because the executor fills one in for it. Same problem I had with the mpi executor originally
<jbjnr>
what did you do to fix that, hkaiser?
<jbjnr>
the deferred_invoke return type is invalid because not all the params are present.
<jbjnr>
I'll change apply to do the same and get rid of the enable_if
<K-ballo>
that doesn't sound right.. is the enable_if sfinae not wanted?
<jbjnr>
ok, that fixes the compilation problem, but is it the right thing to do?
<jbjnr>
^^^messages crossed
<jbjnr>
I guess it just shifts the compilation error elsewhere if there is a real problem
<K-ballo>
if we didn't want sfinae in it we shouldn't have used sfinae in it; the question is whether the sfinae was intended or an artifact
<jbjnr>
indeed. since hkaiser has removed it from async, I guess we don't need it any more and I can remove it from apply now and rely on return type deduction of the final layer of the onion
<K-ballo>
return type deduction doesn't sfinae
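A small illustration of K-ballo's point, with hypothetical function names: an expression in a trailing return type participates in SFINAE, while a deduced return type (auto/decltype(auto)) turns the same problem into a hard error at instantiation.

```cpp
#include <utility>

// a trailing-return-type decltype participates in sfinae: if the call
// expression is ill-formed, this overload silently drops out of the set
template <typename F, typename... Ts>
auto post_sfinae(F&& f, Ts&&... ts)
    -> decltype(std::forward<F>(f)(std::forward<Ts>(ts)...))
{
    return std::forward<F>(f)(std::forward<Ts>(ts)...);
}

// a deduced return type does not: an ill-formed call in the body is a
// hard error at instantiation time, never a substitution failure
template <typename F, typename... Ts>
decltype(auto) post_deduced(F&& f, Ts&&... ts)
{
    return std::forward<F>(f)(std::forward<Ts>(ts)...);
}
```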
<hkaiser>
K-ballo: I removed it as it was over-constraining things
<hkaiser>
the underlying facilities constrain things sufficiently
<K-ballo>
so I gather we never actually wanted sfinae in there
<hkaiser>
it would have moved possible errors up the instantiation stack
<jbjnr>
great. I'm happy then. that was what got in the way of my first mpi attempt. glad I understand it now
<K-ballo>
those should have been static_asserts then
<K-ballo>
sfinae is terrible for aiding diagnostics
<jbjnr>
:+1
<hkaiser>
we know that today, but we (at least I) didn't know that when we implemented that
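A sketch of the alternative K-ballo suggests, with hypothetical names rather than the actual apply.hpp code; std::is_invocable (C++17) stands in here for HPX's own invocability trait. The enable_if version rejects a bad call with the opaque "no type named 'type' in 'struct std::enable_if<false, bool>'" error quoted above, while the static_assert version states the requirement directly.

```cpp
#include <type_traits>
#include <utility>

// sfinae version: a bad call site reports a cryptic enable_if failure
// far from the actual mistake
template <typename F, typename... Ts,
    typename std::enable_if<
        std::is_invocable<F, Ts...>::value, bool>::type = true>
void apply_sfinae(F&& f, Ts&&... ts)
{
    std::forward<F>(f)(std::forward<Ts>(ts)...);
}

// static_assert version: the diagnostic names the violated requirement
template <typename F, typename... Ts>
void apply_asserted(F&& f, Ts&&... ts)
{
    static_assert(std::is_invocable<F, Ts...>::value,
        "apply_asserted: f is not callable with the given arguments");
    std::forward<F>(f)(std::forward<Ts>(ts)...);
}
```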
<Yorlik>
Can't you replace a lot of sfinae with if constexpr?
<heller1>
Yes
<heller1>
C++17
<Yorlik>
Argh - forgot hpx is c++14 - right?
<Yorlik>
Or is it 11?
<K-ballo>
similarly, we shouldn't be replacing a lot of sfinae with if constexpr, we should be replacing tag dispatching with if constexpr
jaafar has joined #ste||ar
<jbjnr>
Yorlik: 14 minimum now
<Yorlik>
Alright. Thanks.
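For reference, a sketch of the replacement K-ballo has in mind, using std::advance-style iterator handling as a stand-in example; the if constexpr version needs C++17, so it is out of reach while HPX targets C++14.

```cpp
#include <iterator>
#include <type_traits>

// tag-dispatching version: two overloads selected via iterator category
template <typename It>
void advance_impl(It& it, int n, std::random_access_iterator_tag)
{
    it += n;
}

template <typename It>
void advance_impl(It& it, int n, std::input_iterator_tag)
{
    while (n-- > 0)
        ++it;
}

template <typename It>
void advance_tag(It& it, int n)
{
    advance_impl(it, n,
        typename std::iterator_traits<It>::iterator_category{});
}

// if constexpr version (C++17): one function, branch chosen at compile time
template <typename It>
void advance_constexpr(It& it, int n)
{
    if constexpr (std::is_base_of_v<std::random_access_iterator_tag,
                      typename std::iterator_traits<It>::iterator_category>)
        it += n;
    else
        while (n-- > 0)
            ++it;
}
```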
sayefsakin has joined #ste||ar
kale[m] has joined #ste||ar
parsa has joined #ste||ar
hkaiser has quit [Quit: bye]
<bita_>
K-ballo, can I ask a question?
<K-ballo>
bita_: you may try
<bita_>
I am trying to write a distributed version of csv_read. If a locality wants to start reading lines from a specific line, is there an option better than std::getline? I am looking into seekg, but haven't found the best option
<K-ballo>
you don't know where the line ends and the next one starts without scanning the entire line
<K-ballo>
you could seek, then scan the remainder of the line for the end, then start right after that?
<K-ballo>
so seekg + getline and ignore
<K-ballo>
but you can't seek to a specific line, unless you've scanned the entire file and already know where each one starts
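A minimal sketch of the seekg + getline approach K-ballo outlines, assuming each locality is handed a byte range [begin, end) from splitting the file size; the function name and chunking convention are illustrative.

```cpp
// Each locality seeks to its byte offset, discards the (possibly partial)
// line it landed in, and then reads whole lines. A line is owned by the
// chunk in which it starts, so chunks neither skip nor duplicate lines.
#include <fstream>
#include <string>
#include <vector>

std::vector<std::string> read_csv_chunk(
    std::string const& path, std::streamoff begin, std::streamoff end)
{
    std::ifstream in(path);
    std::vector<std::string> lines;
    std::string line;

    in.seekg(begin);
    // unless we start at the very beginning of the file we may have
    // landed mid-line; skip ahead to the next line boundary (the
    // previous chunk reads the line we skip here)
    if (begin != 0)
        std::getline(in, line);

    // read lines that start within (or at the boundary of) our chunk
    while (static_cast<std::streamoff>(in.tellg()) <= end &&
        std::getline(in, line))
    {
        lines.push_back(line);
    }
    return lines;
}
```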