<zao>
You typically only need the `template` keyword in places where disambiguation is required or where you're declaring a nested template.
<gonidelis>
is `algorithm_result` a template instantiation though?
<zao>
Like in cases akin to `void D::f() { B::template get<9001>(); }` where you're accessing something in a dependent base.
<zao>
If you're already in a context where a type is expected I don't know of any reason why you'd use the keyword.
<zao>
(unless there's language parts I'm not familiar with, like half of what HPX uses :D )
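A minimal, compilable version of the example zao gives (the names `Base`, `D`, and `get` are illustrative):

```cpp
// Illustrative only: get() is a member template of the dependent base B, so
// the call inside D::f() needs the `template` keyword to parse correctly.
struct Base {
    template <int N>
    void get() {}
};

template <typename B>
struct D : B {
    void f() {
        // Without `template`, `B::get < 9001` would parse as a comparison
        // rather than a template-argument list.
        B::template get<9001>();
        // Equivalent spelling through this->:
        this->template get<9001>();
    }
};

int main() {
    D<Base>{}.f();
}
```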
<zao>
Are you investigating some problem or just trying to understand some code?
<gonidelis>
Just trying to understand the code
<gonidelis>
I want to be sure that I can reproduce what I read and not just comprehend it passively...
<gonidelis>
U said "dependent base". Yesterday I was reading articles about typenames / templates for like 3-4 hours and I think I am still confused about when a name actually needs disambiguation...
<zao>
Say that you've got `template <typename B> struct D : B {};` or `template <typename T> struct D : B<T> {};`
<zao>
The base there is dependent (on a template argument), so the rules for implicit lookup of base members in a derived type do not apply; you need to say more explicitly what the thing you're looking up is, and where it lives.
<gonidelis>
So both commands produce warnings?
<zao>
Those definitions are fine on their own.
<zao>
The problem is if you're in a member function and try to refer to something from a dependent base. At the point of parsing the template it's rather impossible to know what a name refers to in the base, or what base it's referring to at all.
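A sketch of that lookup problem in code (illustrative names; the commented-out lines show the alternatives):

```cpp
// Illustrative only: `value` lives in the dependent base B<T>, so unqualified
// lookup inside D does not search it while the template is being parsed.
template <typename T>
struct B {
    int value = 42;
};

template <typename T>
struct D : B<T> {
    int f() {
        // return value;       // error: `value` not found at parse time
        return this->value;    // OK: lookup deferred to instantiation
        // return B<T>::value; // also OK: names the base explicitly
    }
};

int main() {
    return D<int>{}.f() == 42 ? 0 : 1;
}
```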
<gonidelis>
Can u use a synonym for base plz?
<gonidelis>
I think I am pretty close to getting what you are saying
<zao>
I can say base class or interface if it helps :)
<gonidelis>
So let me get this straight: the problem is that the compiler has literally no idea what `B` or `T` could be in the context of the class???
<hkaiser>
perhaps something that was left locked on a previous thread exit (as you disabled all checks..)
<Yorlik>
I'll restart and get the first exception
<hkaiser>
I think we don't check for locked locks on thread exist (but we should)
<hkaiser>
*thread exit*
<hkaiser>
hold on, this _is_ on thread exit
<hkaiser>
you left something locked and ignored it
<Yorlik>
What do you mean by that?
<hkaiser>
then re-enabled its tracking so it triggers at thread exit
<Yorlik>
All my locks are implemented as lock_guard
<Yorlik>
with a spinlock
<hkaiser>
your thread exits with a locked lock hanging around
<hkaiser>
hmmm
<Yorlik>
It's simply impossible that I am locking and exiting - or something crashes
<Yorlik>
Every lock is strictly scoped
<hkaiser>
Yorlik: the exception is triggered here: after the actual thread function returned
<hkaiser>
well
<hkaiser>
you could have ignored the lock, then suspended the hpx thread; it got resumed on a different core, and the next hpx thread on the old core sees the locked lock and complains about it
<hkaiser>
please reconsider leaving the lock locked while suspending, do you really need that?
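A sketch of the pattern hkaiser suggests: keep the spinlock's scope tight and let it end before anything that can suspend the hpx thread. The HPX spinlock type and headers here are assumptions and vary between HPX versions.

```cpp
#include <mutex>

#include <hpx/include/lcos.hpp>              // header names vary by HPX version
#include <hpx/synchronization/spinlock.hpp>

hpx::lcos::local::spinlock mtx;              // spinlock type name also varies

void update(hpx::shared_future<int> f)
{
    {
        // Keep the locked region free of suspension points.
        std::lock_guard<hpx::lcos::local::spinlock> lk(mtx);
        // ... touch shared state only ...
    }   // lock released here

    // Anything that may suspend the hpx thread (and let it resume on another
    // core) happens outside the locked scope.
    int value = f.get();
    (void) value;
}
```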
<Yorlik>
I could do several things
<hkaiser>
I think the locks are ignored on a core-by-core basis
<hkaiser>
(iirc)
<Yorlik>
Maybe it's time for me to implement proper logging :)
<Yorlik>
The locks helped me to get clear output while a lua state is being initialized
<Yorlik>
So their output on init doesn't get intermingled
<Yorlik>
I'll try to work around that.
<Yorlik>
However: Should I make an issue out of this? I mean do you have a realistic chance to improve your checks to consider thread migration?
<hkaiser>
not sure
<hkaiser>
I've never seen this before
<Yorlik>
I am pretty sure it is my init() function, since when I create many engines it happens faster and more often
<Yorlik>
Let me remove that one lock and see what happens
<Yorlik>
Actually I could enable it dynamically
<Yorlik>
Because I do not need it always - only when debugging the engine initialization lua scripts
<hkaiser>
Yorlik: yah, the lock registration is entirely thread_local
<hkaiser>
I think I could work around that
<Yorlik>
If you can do it with no or minimal overhead as an option for tricky cases I think that would be useful.
<Yorlik>
After all it is meant for development, not production
<Yorlik>
The good thing about this problem is, it forces me to reflect on every lock :D
<hkaiser>
yes
<Yorlik>
If a thread writes to a slot in a std::unordered_map and another reads, but it is guaranteed to be another slot - is that UB or safe?
<hkaiser>
Yorlik: probably UB
<hkaiser>
Yorlik: might be safe if the entries exist
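A sketch of the "entries exist" case: if every key is inserted up front and no thread later inserts, erases, or rehashes, concurrent access to distinct elements is like concurrent access to distinct objects; any concurrent mutation of the container itself makes it a data race again. Plain std::thread is used here purely for illustration.

```cpp
#include <cstddef>
#include <thread>
#include <unordered_map>

int main()
{
    std::unordered_map<int, std::size_t> slots;

    // All keys are inserted up front; afterwards no thread inserts, erases,
    // or rehashes, so the container structure never changes concurrently.
    for (int key = 0; key < 4; ++key)
        slots.emplace(key, 0);

    // Each thread touches a distinct, pre-existing element through at(),
    // which is treated as const for data-race purposes.
    std::thread writer([&] { slots.at(0) = 123; });
    std::thread reader([&] { std::size_t v = slots.at(1); (void) v; });

    writer.join();
    reader.join();
}
```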
<Yorlik>
Another portion to redesign.
<Yorlik>
How can I store something on a task and query it from inside?
<Yorlik>
Because that map is a crappy workaround
<Yorlik>
I just need to store a reference to a lua engine
<Yorlik>
So every update can get it from the chunk's task
<hkaiser>
Yorlik: you have a void* (size_t) at your disposal, I think we've talked about that
<hkaiser>
I even created an example for you demonstrating how to delete things once the thread exited
<Yorlik>
I think I didn't understand it yet.
<Yorlik>
Dang - somehow I missed something.
<hkaiser>
each thread can store a user defined size_t that it carries around for you
<hkaiser>
you can set it and query it using the thread's id
<Yorlik>
So basically I use that size_t as a raw pointer?
<hkaiser>
yes
<Yorlik>
Wow - you're using a raw pointer ;)
<hkaiser>
I'm using a size_t
<hkaiser>
casting to/from void* is on you ;-)
<Yorlik>
How else would it make any sense?
<hkaiser>
sure, sure
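A sketch of that mechanism, assuming HPX's hpx::threads::set_thread_data / get_thread_data helpers keyed by the thread id (exact names, headers, and signatures depend on the HPX version; LuaEngine is a hypothetical stand-in):

```cpp
#include <cstddef>

#include <hpx/include/threads.hpp>   // header name varies by HPX version

struct LuaEngine { /* ... */ };      // hypothetical stand-in for the real engine

// Attach an engine pointer to the current hpx thread.
void attach_engine(LuaEngine* engine)
{
    hpx::threads::set_thread_data(
        hpx::threads::get_self_id(), reinterpret_cast<std::size_t>(engine));
}

// Fetch it back later from within the same hpx thread.
LuaEngine* current_engine()
{
    return reinterpret_cast<LuaEngine*>(
        hpx::threads::get_thread_data(hpx::threads::get_self_id()));
}
```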
<Yorlik>
I think I'll create an object in on_start and delete it in on_exit
<hkaiser>
right
<Yorlik>
Is that mechanism already available in the lambdas?
<hkaiser>
what lambdas?
<Yorlik>
on_start and on_exit
<hkaiser>
btw: I have a solution for the held locks problem, I think
<Yorlik>
That custom executor I am using
<hkaiser>
sure, on_start/on_end are executed by the hpx thread that will run the actual function
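Continuing the sketch above, a rough shape of the on_start/on_exit hooks Yorlik describes, assuming the custom executor runs these user-supplied callbacks on the hpx worker thread (attach_engine, current_engine, and LuaEngine as defined earlier):

```cpp
// Hypothetical hooks for the custom executor: the engine is created when the
// hpx thread starts and destroyed when it ends.
auto on_start = [] {
    attach_engine(new LuaEngine{});
};

auto on_exit = [] {
    delete current_engine();
    hpx::threads::set_thread_data(hpx::threads::get_self_id(), 0);
};
```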
<Yorlik>
I think the held locks exploding is somehow a good thing. It gives you a thorough warning on possible deadlocks
<Yorlik>
Though - I guess if there's task switching involved they will eventually dissolve anyway
<Yorlik>
Like the holding task will be rescheduled soonish
<hkaiser>
Yorlik: yah, that's the purpose of the lock registration
<hkaiser>
make you think twice
<Yorlik>
Don't remove it - just make the switch to turn it off work. I think this is really dangerous territory, especially since it might just work for a while.
<Yorlik>
So - that warning is really good, imo.
<hkaiser>
Yorlik: you can already turn it off (through some hpx.register_locks=0 or somesuch)
<hkaiser>
I will not remove it, I will fix the problem that the lock stays registered if the thread is resumed on a different core
<Yorlik>
I won't do it - I'm too new to all this - being knocked here is a good thing - also I am rethinking suboptimal design choices..