<Yorlik>
A big problem was the task migration inside my alloc function.
<hkaiser>
<quote>I fixed the last bug</quote> ;-)
<Yorlik>
Just realized writing something thread safe can be even more tricky in a task environment, depending on the situation.
<hkaiser>
indeed - everything is moving at the same time
<Yorlik>
The good thing is, idle times went down to ~8% when I ran large numbers of objects
<hkaiser>
very nice
<Yorlik>
The system gets more efficient the more work it has
<Yorlik>
Still tweaking.
<Yorlik>
At some point I'd like to talk to you about the parameters in voice.
<Yorlik>
It seems under certain circumstances work is accumulating on certain threads
<Yorlik>
And other threads seem underused.
<Yorlik>
I can see that from the engine spares they have
<Yorlik>
I guess it's an artifact coming from a suboptimal setting of the autochunker target time and a low workload
<Yorlik>
amongst others.
<Yorlik>
So - I believe tweaking is becoming an art form here.
<hkaiser>
could be
<hkaiser>
Yorlik: we need to find a way to make it runtime adaptive
<Yorlik>
it wouldn't be an issue to make the autochunker target time a dynamic parameter
<Yorlik>
But what logic?
<Yorlik>
To me that actually looks like an AI problem
<Yorlik>
Finding an optimum in a system with many chaotic parameters
<hkaiser>
the target time can be specified already
<Yorlik>
Where the most chaotic thing is the dynamic change of the workload in my case
<Yorlik>
Yes
<Yorlik>
I'll have to figure out some things, which means I have to really learn how the system functions dynamically
K-ballo has quit [Quit: K-ballo]
akheir has quit [Ping timeout: 260 seconds]
<Yorlik>
Wow - bumping my autochunker target time from 400 to 5000 microseconds made my framerate like 10x faster - seems the overhead I have per chunk is still too big, and lowering it helped. Mostly it's about the retrieval of the Lua engine
K-ballo has joined #ste||ar
karame_ has quit [Remote host closed the connection]
<hkaiser>
fair
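For reference, a minimal sketch of how a per-chunk target time can be handed to an HPX parallel algorithm through the auto_chunk_size executor parameter, which is presumably the knob behind the "autochunker target time" discussed above. The GameObject type and update() call are placeholders, and header/namespace spellings vary between HPX versions:

    #include <hpx/include/parallel_for_each.hpp>
    #include <hpx/include/parallel_executor_parameters.hpp>
    #include <chrono>
    #include <vector>

    struct GameObject { void update() { /* per-object work */ } };  // placeholder type

    void update_all(std::vector<GameObject>& objects)
    {
        // Ask the chunker to size chunks so that each one takes roughly
        // 5000 microseconds of measured execution time.
        hpx::parallel::execution::auto_chunk_size chunker(
            std::chrono::microseconds(5000));

        hpx::parallel::for_each(
            hpx::parallel::execution::par.with(chunker),
            objects.begin(), objects.end(),
            [](GameObject& obj) { obj.update(); });
    }

Fewer, larger chunks mean the per-chunk setup cost (e.g. grabbing a Lua engine from a pool) is paid less often, which matches the 10x observation above.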
rtohid has quit [Remote host closed the connection]
<Yorlik>
It looks a bit like this thing which struck me a while ago too
<Yorlik>
When you hold a lock and the task yields, there's a built-in warning in debug mode
<Yorlik>
You might wanna check if it goes away in a release build
<Yorlik>
You can block it by creating a special object.
<Yorlik>
weilewei ^^
<weilewei>
Yorlik ok, I haven't tried Release build
<weilewei>
I will give it a try
<Yorlik>
If it works I can give you a trick to make it work, together with a caveat
<K-ballo>
gonidelis[m]: the compiler can't know whether `std::decay<Iter>::type` is a value or a type, so it assumes it is a value unless you say otherwise
nan111 has joined #ste||ar
<K-ballo>
sometimes the compiler can figure it out by looking at the context in which it is used (where only a type would be valid), and in those cases you don't need to say anything (but you still can)
<Yorlik>
I think that has confused me in the past a bit - "typename" wasn't always used, just sometimes.
Nikunj__ has quit [Quit: Leaving]
<Yorlik>
But it makes sense that the compiler sometimes needs it and sometimes not
<K-ballo>
and that changes with standard version, making it even more confusing
<Yorlik>
Argh ... hell ..
<K-ballo>
I tend to still put it anywhere C++98 would have required it
<Yorlik>
I like verbose code - if you get too terse it gets confusing easily.
<K-ballo>
which means every time it is dependent, except in the base classes list
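To make the dependent-name point concrete, here is a small self-contained sketch (the names store and wrapper are just for illustration):

    #include <type_traits>
    #include <utility>

    // Inside a template, std::decay<T>::type depends on T, so the compiler
    // assumes it names a value unless you write "typename".
    template <typename T>
    void store(T&& value)
    {
        typename std::decay<T>::type copy = std::forward<T>(value);  // typename required
        (void) copy;
    }

    // In a base class list only a type can appear, so the compiler can figure
    // it out and no "typename" is needed even though the name is dependent.
    template <typename T>
    struct wrapper : std::decay<T>::type
    {
    };

Which contexts require the keyword was relaxed in C++20, which is the standard-version confusion mentioned above.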
richard[m]1 has joined #ste||ar
noise[m] has joined #ste||ar
smith[m] has joined #ste||ar
rori has joined #ste||ar
kordejong has joined #ste||ar
mdiers[m] has joined #ste||ar
diehlpk_mobile[m has joined #ste||ar
ms[m] has joined #ste||ar
heller1 has joined #ste||ar
<hkaiser>
weilewei: it is what it says - you're holding a lock (mutex) while the thread holding it is suspending
<hkaiser>
weilewei: this on its own is not always a problem, but it can lead to nasty deadlocks, so we diagnose it
<weilewei>
hkaiser is there any way to diagnose it? Apparently I have no idea what the libcds stress test is doing
<weilewei>
at least at this point
parsa[m] has joined #ste||ar
zao[m] has joined #ste||ar
<hkaiser>
weilewei: run it in a debugger, stop at the thrown exception and go up in the stack backtrace to see if you can spot the frame that holds the lock
parsa[m] is now known as Guest21318
<weilewei>
hkaiser ok
<hkaiser>
then figure out whether you need to hold the lock while suspending (could be the case) or find a workaround by unlocking it for the duration of the suspension
<hkaiser>
if it's necessary, add code that tells hpx to ignore the lock
<weilewei>
hkaiser what does it mean to tell hpx to ignore the lock? Is there any example?
<Yorlik>
weilewei: Create this object: hpx::util::ignore_all_while_checking ignore_lock_checks;
<Yorlik>
As long as it exists these checks are ignored
<hkaiser>
weilewei: ignoring can be done by putting something like hpx::util::ignore_while_checking<Lock> il(&lock); on the stack before suspending (Lock is your lock type, e.g. unique_lock<mutex>, and lock is your lock instance)
<hkaiser>
right, alternatively ignore all locks like Yorlik suggested
<Yorlik>
Didn't know you could do it specifically - nice !
<weilewei>
Yorlik hkaiser Thanks! I will give it a try
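Putting the pieces from this exchange together, a rough sketch of both variants (the mutex, the work, and the header paths are only illustrative; header locations differ between HPX versions):

    #include <hpx/include/lcos.hpp>
    #include <hpx/util/register_locks.hpp>
    #include <mutex>

    hpx::lcos::local::mutex mtx;

    void work_while_locked()
    {
        std::unique_lock<hpx::lcos::local::mutex> lock(mtx);

        // Variant 1: silence the debug-mode lock checker for this one lock
        // (Lock type and instance as described above).
        hpx::util::ignore_while_checking<
            std::unique_lock<hpx::lcos::local::mutex>> il(&lock);

        // Variant 2 (coarser): ignore all held locks while this object exists.
        // hpx::util::ignore_all_while_checking ignore_lock_checks;

        hpx::this_thread::yield();  // suspension point that would otherwise warn

        // Alternative mentioned above: if correctness allows it, unlock around
        // the suspension instead of suppressing the check.
    }

As noted, suppressing the check is only safe once you have convinced yourself that the held lock cannot participate in a deadlock while the task is suspended.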
<Yorlik>
We really should have a knowledge base or something or a text search for the entire IRC log to find these gems.
<hkaiser>
Yorlik: irclog has a search
<Yorlik>
hkaiser: Over the entire time?
<hkaiser>
yes
<Yorlik>
I think I missed that. Thanks !
<Yorlik>
Oh - there are two different searches - I see now
<Yorlik>
I always used the filter - totally not sufficient - but the real search is nice.
<K-ballo>
responsible for 3.27s wall time :| (6801 instantiations)
<weilewei>
so essentially, the code creates a vector of empty threads and, at the end, joins each of them. Each thread will run some specific task
<weilewei>
vector of hpx::thread
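A stripped-down sketch of that structure, not the actual libcds test (some_specific_task is a placeholder, and the include path is approximate):

    #include <hpx/include/threads.hpp>
    #include <cstddef>
    #include <vector>

    void some_specific_task(std::size_t id) { (void) id; }  // placeholder work

    void run_stress_test(std::size_t num_threads)
    {
        // a vector of default-constructed (empty) hpx::threads
        std::vector<hpx::thread> threads(num_threads);

        // assign each one its task
        for (std::size_t i = 0; i != num_threads; ++i)
            threads[i] = hpx::thread(some_specific_task, i);

        // and join them all at the end
        for (hpx::thread& t : threads)
            t.join();
    }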
<Yorlik>
hkaiser: I am now getting the correct thread IDs. This will allow me a more reliable setup of thread local structures, like pools. Thanks a lot !
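As an illustration of what the correct thread IDs buy here, a hedged sketch of indexing per-worker pools by hpx::get_worker_thread_num() (EnginePool is a made-up stand-in for e.g. a pool of Lua engines, and the include path is approximate):

    #include <hpx/include/runtime.hpp>
    #include <cstddef>
    #include <vector>

    struct EnginePool { /* e.g. reusable Lua engines */ };  // hypothetical type

    // one pool per worker thread; resized to hpx::get_num_worker_threads()
    // once the runtime is up
    std::vector<EnginePool> pools;

    EnginePool& local_pool()
    {
        // index of the worker thread currently running this task
        std::size_t tid = hpx::get_worker_thread_num();
        return pools[tid];
    }

One caveat that ties back to the task-migration issue mentioned at the top: if a task suspends and resumes on a different worker, a reference obtained before the suspension may point at another worker's pool, so it should not be cached across yield points.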
<Yorlik>
hkaiser - I commented on the PR. Github is doing funny things with my text though - I'll not edit it :D
<K-ballo>
taking std::result_of out of the way also helps significantly
<K-ballo>
msvc's INVOKE machinery is... not good... but luckily we don't even need it
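For context on why dropping std::result_of helps: when the callable is only ever used as a plain call expression, a decltype of that expression yields the same result type without instantiating the INVOKE machinery. A minimal sketch (the function names are made up):

    #include <type_traits>
    #include <utility>

    // result_of spelling: pulls in the full INVOKE machinery
    template <typename F, typename... Ts>
    typename std::result_of<F&&(Ts&&...)>::type
    call_result_of(F&& f, Ts&&... ts)
    {
        return std::forward<F>(f)(std::forward<Ts>(ts)...);
    }

    // decltype spelling: same result for plain call expressions and cheaper
    // to instantiate (but it does not cover member-pointer INVOKE cases)
    template <typename F, typename... Ts>
    auto call_decltype(F&& f, Ts&&... ts)
        -> decltype(std::forward<F>(f)(std::forward<Ts>(ts)...))
    {
        return std::forward<F>(f)(std::forward<Ts>(ts)...);
    }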