aserio changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
<K-ballo>
shuffle will try to swap values around, and since those rows are rvalues that requires a special rvalue swap
<K-ballo>
(we have a similar one for util::tuple in HPX, a total hack, but such is life)
EverYoun_ has joined #ste||ar
<K-ballo>
ForwardIterator requires addressof(*a) == addressof(*b) if a == b
<hkaiser>
K-ballo: it's not a real reference but a proxy to a real reference
<K-ballo>
hkaiser: is that supposed to make it better..?
<hkaiser>
no
<K-ballo>
heh
<hkaiser>
same problem as we've had with zip_iterator, remember?
<K-ballo>
yep, I mention our util::tuple hack above
<hkaiser>
ahh ok, missed that
EverYoung has quit [Ping timeout: 252 seconds]
EverYoun_ has quit [Ping timeout: 240 seconds]
<hkaiser>
yah, our own rvalue swap might be needed
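A minimal sketch of the kind of rvalue swap overload being discussed, assuming a hypothetical `row_proxy` type standing in for Blaze's row views; this is not actual HPX or Blaze code, just an illustration of why proxies handed out as rvalues need their own overload:

```cpp
#include <utility>

namespace demo {
    // Hypothetical proxy "reference" into a container (e.g. a matrix row
    // view). Dereferencing a zip/row iterator yields such proxies by value,
    // i.e. as rvalues.
    struct row_proxy {
        int* data;
    };

    // Ordinary swap overloads take lvalue references, so they cannot bind to
    // rvalue proxies; an overload taking rvalue references fixes that by
    // swapping the referred-to elements instead of the proxies themselves.
    void swap(demo::row_proxy&& a, demo::row_proxy&& b)
    {
        std::swap(*a.data, *b.data);
    }
}
```

This mirrors the `util::tuple` "hack" mentioned above: the proxy objects are temporaries, but the elements they point at are swapped for real.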
<parsa[w]>
so dereference() is supposed to return a Row<T>&?
<hkaiser>
parsa[w]: why blaze::swap? and not just swap?
<parsa[w]>
does it actually matter here?
<hkaiser>
absolutely!
<parsa>
trying it right now… why though?
<zao>
One core difference in general is that ADL only kicks in for unqualified uses.
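zao's point about ADL can be shown in standard C++; `lib::widget` and `generic_swap` are made-up names for illustration. A qualified call like `blaze::swap(a, b)` or `std::swap(a, b)` names one specific function, while an unqualified `swap(a, b)` lets argument-dependent lookup find an overload in the arguments' namespace:

```cpp
#include <utility>

namespace lib {
    struct widget { int v; };

    // Found via ADL, but only when swap is called *unqualified*.
    void swap(widget& a, widget& b) { std::swap(a.v, b.v); }
}

template <typename T>
void generic_swap(T& a, T& b)
{
    using std::swap;   // fallback if no namespace-local overload exists
    swap(a, b);        // unqualified: ADL picks lib::swap for lib::widget
    // std::swap(a, b) here would always instantiate the std template instead
}
```

This is why it matters whether a `blaze::swap` actually exists: a qualified call fails to compile if it does not, whereas the unqualified idiom quietly falls back to `std::swap`.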
<hkaiser>
are you sure there actually exists a blaze::swap?
<parsa>
hkaiser: yes, that's the reason i chose it
<hkaiser>
where is it?
<parsa>
wanted to make sure some random swap isn't picked up
ct_ has joined #ste||ar
EverYoung has joined #ste||ar
<parsa>
apparently there isn't any :|
<hkaiser>
see
<zao>
Ooh, I got an idea. I should Wireshark my hello worlds.
<hkaiser>
uhhm we're doomed ;)
<parsa>
hkaiser: swap without blaze:: failed too
<zao>
Well, locality 1 suffers some sort of access violation and terminates because of it. I want to know at what stage that happens, and looking at the communication with locality 0 ought to give _some_ information on that, surely?
<hkaiser>
parsa: let's look tomorrow
<hkaiser>
zao: nod
<hkaiser>
you could add --hpx:debug-hpx-log=file, but that might change timings considerably and the problem might go away
<zao>
By the way, I'm on the road until sunday evening, so I may respond slowly to comms until then.
<hkaiser>
safe travels
EverYoung has quit [Remote host closed the connection]
parsa has quit [Quit: Zzzzzzzzzzzz]
EverYoung has joined #ste||ar
parsa has joined #ste||ar
parsa has quit [Ping timeout: 276 seconds]
ct_ has quit [Read error: Connection reset by peer]
ct_ has joined #ste||ar
EverYoung has quit [Remote host closed the connection]
EverYoung has joined #ste||ar
diehlpk has joined #ste||ar
ct_ has quit [Quit: Leaving]
EverYoung has quit [Ping timeout: 245 seconds]
diehlpk has quit [Ping timeout: 260 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 252 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 240 seconds]
hkaiser has quit [Quit: bye]
K-ballo has quit [Quit: K-ballo]
parsa has joined #ste||ar
vamatya has quit [Ping timeout: 248 seconds]
EverYoung has joined #ste||ar
vamatya has joined #ste||ar
wash[m] has quit [Read error: Connection reset by peer]
auviga has quit [Ping timeout: 256 seconds]
EverYoung has quit [Ping timeout: 245 seconds]
wash[m] has joined #ste||ar
parsa has quit [Quit: Zzzzzzzzzzzz]
auviga has joined #ste||ar
<github>
[hpx] cogle opened pull request #3210: Adapted parallel::{search | search_n} for Ranges TS (see #1668) (master...search_update) https://git.io/vAyIp
nanashi55 has quit [Ping timeout: 252 seconds]
nanashi55 has joined #ste||ar
vamatya has quit [Ping timeout: 252 seconds]
EverYoung has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
nikunj has joined #ste||ar
jaafar has quit [Ping timeout: 268 seconds]
CaptainRubik has joined #ste||ar
CaptainRubik has quit [Ping timeout: 260 seconds]
david_pfander has joined #ste||ar
anushi has quit [Remote host closed the connection]
Anushi1998 has joined #ste||ar
<github>
[hpx] msimberg pushed 1 new commit to fixing_3182: https://git.io/vAyWw
<github>
hpx/fixing_3182 e14f560 Mikael Simberg: Add some missing when_all includes to tests
EverYoung has joined #ste||ar
david_pfander1 has joined #ste||ar
EverYoung has quit [Ping timeout: 276 seconds]
david_pfander1 has quit [Ping timeout: 256 seconds]
<jbjnr>
zao: diehlpk_work heller_ thanks for the thanks. Having a nice day off today.
<heller_>
jbjnr: enjoy!
nikunj has quit [Ping timeout: 260 seconds]
<taeguk>
jbjnr: Congratulations on passing your PhD defence :)
<zao>
We had a dude at the CS department who was going to have his licentiate defence the other day. One of his committee members had her flight from Stockholm delayed repeatedly.
<zao>
Think they got the show on the road four hours after the intended start time.
mcopik has joined #ste||ar
nikunj has joined #ste||ar
hkaiser has joined #ste||ar
K-ballo has joined #ste||ar
vsc20453 has joined #ste||ar
<vsc20453>
Hi, I'm trying to use (local) channels, but am a bit confused about blocking behaviour
<hkaiser>
vsc20453: ok, channels shouldn't block if you don't ask them to
<vsc20453>
hkaiser: that's what I thought, since they always return a future, right?
<hkaiser>
right
<hkaiser>
both channel.get and channel.set return a future
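The non-blocking semantics described here can be mimicked with the standard library alone (no HPX needed for the illustration; `demo_nonblocking_get` is a made-up name): asking for a value hands back a future immediately, and only waiting on that future blocks.

```cpp
#include <future>

// Analogy for channel semantics using std::promise/std::future:
// requesting the value returns a future at once; nothing blocks until
// someone calls .get() on that future.
int demo_nonblocking_get()
{
    std::promise<int> slot;                  // plays the role of channel.set
    std::future<int> f = slot.get_future();  // plays the role of channel.get(t)
    // ...we already hold the future here, and nothing has blocked yet...
    slot.set_value(42);
    return f.get();                          // only this call waits
}
```

So if a thread appears to block at `some_channel.get(t)`, the suspect is usually a `.get()` on the returned future for a slot that never gets set, not the channel call itself.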
<vsc20453>
still, my thread seems to block at some_channel.get(t) (for t=0, if that matters). Are channels freely copyable & assignable? (seemed to work in a small test code..)
<hkaiser>
yes
<hkaiser>
do you have some code that demonstrates the issue?
<hkaiser>
code we can look at, that is?
<vsc20453>
yes, but it's ~200 lines, so not really a minimal test case
<hkaiser>
vsc20453: could you try to minimize it?
<vsc20453>
I'll try to come up with a minimal version...
<hkaiser>
great, thanks
<zao>
200 lines isn't _too_ bad for a first glance either, assuming it's in one TU.
EverYoung has joined #ste||ar
<hkaiser>
indeed
<K-ballo>
channels are copyable?
<vsc20453>
by the way: what's the recommended way to add debug prints to code? I have the impression that hpx::cout output sometimes gets swallowed (especially when things don't work as they should), whereas std::cout does make it through. Is there any problem with using std::cout for quick and dirty debugging?
EverYoung has quit [Ping timeout: 276 seconds]
<hkaiser>
vsc20453: shouldn't be a problem
<hkaiser>
std::cout goes through the kernel and might block the current hpx-thread while doing IO; for debugging this is not an issue
<hkaiser>
hpx::cout sends things to a special thread that then calls std::cout, so things may not get flushed in time if something goes wrong
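As a standard-library aside on the flushing issue: `std::cerr` is unit-buffered by default, so it is a common choice for crash-adjacent debug prints. The helper names below are made up for illustration:

```cpp
#include <iostream>
#include <string>

// Format a debug line; separated out so it can be tested without
// capturing the stream.
std::string debug_line(const std::string& tag, const std::string& msg)
{
    return "[" + tag + "] " + msg;
}

// std::cerr flushes after each insertion, so the message survives even if
// the program dies immediately afterwards; std::cout output may still be
// sitting in a buffer unless explicitly flushed (e.g. with std::endl).
void debug_print(const std::string& tag, const std::string& msg)
{
    std::cerr << debug_line(tag, msg) << '\n';
}
```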
<zao>
Be on a cool OS, and OutputDebugStringA :P
<hkaiser>
lol
<hkaiser>
zao: have they converted you now?
nikunj has quit [Ping timeout: 260 seconds]
hkaiser has quit [Quit: bye]
vsc20453 has left #ste||ar ["ERC (IRC client for Emacs 25.3.1)"]
<galabc>
heller_ or hkaiser could I ask you two questions concerning the HPX smart executors?
<galabc>
I have two ideas for a Google summer of code project
<hkaiser>
galabc: ok
<galabc>
first off, I wonder if the chunk size and prefetching distance are continuous variables
<galabc>
for instance can you have a chunk size of 33.3456%
<hkaiser>
in principle yes
<galabc>
or it is restricted to certain values
<hkaiser>
the chunk size is probably an integral value (e.g. 5 iterations to run on the same hpx thread)
<hkaiser>
there can't be 5.3 iterations
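If a regression does output a continuous value, converting it to a usable chunk size is just a round-and-clamp; `to_chunk_size` and its bounds are illustrative, not HPX parameters:

```cpp
#include <algorithm>
#include <cmath>

// A fitted model may predict a fractional chunk size; the executor needs a
// whole number of iterations, so round to the nearest integer and clamp
// into a valid range (bounds chosen by the caller).
long to_chunk_size(double predicted, long min_chunk, long max_chunk)
{
    long rounded = std::lround(predicted);
    return std::clamp(rounded, min_chunk, max_chunk);
}
```

The same post-processing would apply to a continuous prediction of the prefetching distance.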
parsa has joined #ste||ar
<galabc>
ok, so I don't think it's useful if the regression outputs a continuous chunk size
<galabc>
since I would need to transform the floating point value into an integer
<galabc>
I assume it's the same thing for the prefetching distance
<galabc>
My other question was: while doing research, I've read that multinomial regressions are mostly used to approximate nominal variables
<galabc>
But in the case of chunk size and prefetching distance, the variables are ordinal
parsa has quit [Quit: Zzzzzzzzzzzz]
<galabc>
I've read that an ordinal logistic regression can be used to find the class of an ordinal variable
<galabc>
hkaiser have you ever heard of that?
<galabc>
Maybe that would be a better direction for the project, since I'm not quite sure chunk size and prefetching distance can be treated as continuous variables
<hkaiser>
galabc: I have not heard about this, but I'm by no means a specialist here
<hkaiser>
actually, I don't know anything, mostly ;)
<diehlpk_work>
galabc, I think you can use the cardinal variant
<diehlpk_work>
Have you checked the hyper parameters?
<diehlpk_work>
And assumptions on the function?
jaafar has joined #ste||ar
<diehlpk_work>
Some machine learning techniques assume smoothness in some sense
jakub_golinowski has joined #ste||ar
<galabc>
diehlpk_work, I will have to check what the cardinal variant is
<galabc>
All I know about ordinal logistic regression is that it's a classification algorithm used when the classes can be ordered (for example 1, 2, 3, 4, ...)
<galabc>
I think it's simply a variant of the multinomial logistic regression
<diehlpk_work>
Ok, please check this
mbremer has joined #ste||ar
<galabc>
I wonder what do you mean by cardinal variant
<galabc>
Is that another type of regression?
<mbremer>
@hkaiser: Do you have some time to chat next week? I'm finishing up serializing those classes, and wanted to discuss migration details
<diehlpk_work>
I meant ordinal regression
<diehlpk_work>
galabc, Just wanted to say that you should check the requirements for the regression you suggest
<galabc>
Oh I understand
<galabc>
I will do this
<diehlpk_work>
I learned yesterday that one has to carefully check the hyperparameters and the assumptions on the residual function
<diehlpk_work>
I was discussing with someone who did his PhD in machine learning how to use the different techniques
<galabc>
It's based on the proportional odds assumption
<galabc>
So maybe it won't work; I have to verify
<diehlpk_work>
Sure, please check, and maybe play around with easy data
<diehlpk_work>
Generate a data set where you know what the minimum is and use this data to predict the minimum
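That validation idea can be sketched in a few lines: synthesize samples from a function whose minimum is known in advance, then check that whatever model is fitted recovers it. `make_quadratic_samples` is a hypothetical helper, not part of any HPX code:

```cpp
#include <utility>
#include <vector>

// Generate n evenly spaced (x, f(x)) samples (n >= 2) from a quadratic with
// a known minimum at x = 2, f(2) = 1. A fitted model's predicted minimum
// can then be compared against this ground truth.
std::pair<std::vector<double>, std::vector<double>>
make_quadratic_samples(double lo, double hi, int n)
{
    std::vector<double> xs, ys;
    for (int i = 0; i < n; ++i) {
        double x = lo + (hi - lo) * i / (n - 1);
        xs.push_back(x);
        ys.push_back((x - 2.0) * (x - 2.0) + 1.0);  // minimum at x = 2
    }
    return {xs, ys};
}
```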
EverYoung has joined #ste||ar
jakub_golinowski has quit [Ping timeout: 256 seconds]
<galabc>
hkaiser here is my second question
<galabc>
From what Zahra told me
<galabc>
For each of the training examples, you try each execution policy and use the one with minimal execution time
<hkaiser>
yes
<galabc>
You then use this value as a 'target value' for the training of the regression
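The labeling scheme described here (fastest policy wins) reduces to an argmin over measured times per training example; `best_policy` is a made-up helper sketching that step:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Given the measured execution times for each candidate execution policy on
// one training example, the classification target is the index of the
// fastest policy.
std::size_t best_policy(const std::vector<double>& times)
{
    return static_cast<std::size_t>(
        std::min_element(times.begin(), times.end()) - times.begin());
}
```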
<galabc>
But I don't think those values are stored in a file
<galabc>
there is an input.dat file with the input data
<hkaiser>
it has to be stored somewhere, though
<hkaiser>
she stored the coefficients for the regression
<hkaiser>
I think
<galabc>
Yes she did
<galabc>
but if I want to train my own regression in python for example
<galabc>
I need the input.dat file which I can access
<galabc>
But I also need the target values
<hkaiser>
ok
<galabc>
And I think she only stored the weights
<hkaiser>
did you ask her where this data is?
<hkaiser>
don't think she has thrown it away
<galabc>
I will ask her again more precisely
<galabc>
If it's stored, it will make my training in python much more straightforward
eschnett_ has joined #ste||ar
eschnett has quit [Ping timeout: 265 seconds]
<galabc>
The question has been asked but I don't know how long it will take