hkaiser changed the topic of #ste||ar to: STE||AR: Systems Technology, Emergent Parallelism, and Algorithm Research | stellar.cct.lsu.edu | HPX: A cure for performance impaired parallel applications | github.com/STEllAR-GROUP/hpx | Buildbot: http://rostam.cct.lsu.edu/ | Log: http://irclog.cct.lsu.edu/
K-ballo has quit [Quit: K-ballo]
hkaiser has quit [Quit: bye]
<simbergm>
akheir1: once you're there, is there something wrong with the boost 1.69 installation? hpx_clang_7_boost_1_69_centos_x86_64_release is the only configuration that can't find boost
K-ballo has joined #ste||ar
hkaiser has joined #ste||ar
jaafar has joined #ste||ar
jaafar has quit [Ping timeout: 246 seconds]
K-ballo has quit [Quit: K-ballo]
K-ballo has joined #ste||ar
eschnett has joined #ste||ar
bita has joined #ste||ar
jaafar has joined #ste||ar
quaz0r has quit [Ping timeout: 246 seconds]
quaz0r has joined #ste||ar
jaafar has quit [Ping timeout: 246 seconds]
jaafar has joined #ste||ar
eschnett has quit [Quit: eschnett]
jaafar has quit [Ping timeout: 268 seconds]
<bita>
@hkaiser, do you have time to look at a file?
<bita>
This is how I think we should write conv1d in the Phylanx backend. It works when it is not lazy, but when it is lazy, the variable does not have a shape
<hkaiser>
how can I reproduce the problem?
<bita>
besides, we cannot slice it. We have the same problem with pool2d, but we couldn't demonstrate it there because pool2d needs a 4d array, whereas conv1d only needs a 3d one
<bita>
please run test1_eager, then run test2_lazy
<bita>
That is really similar to what tensorflow does, but it is pure python. I don't know anything more :"/
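A minimal numpy sketch of the conv1d computation being discussed, assuming a (batch, length, channels_in) input layout and 'valid' padding; conv1d_sketch is an illustrative name, not the Phylanx primitive:

    import numpy as np

    def conv1d_sketch(x, kernel):
        # x: (batch, length, channels_in), kernel: (kernel_size, channels_in, channels_out)
        batch, length, channels_in = x.shape
        kernel_size, _, channels_out = kernel.shape
        result_length = length - kernel_size + 1
        z = np.zeros((batch, result_length, channels_out))
        for i in range(batch):
            image = x[i, :, :]                        # one (length, channels_in) slice
            for j in range(result_length):
                window = image[j:j + kernel_size, :]
                # contract over kernel_size and channels_in, leaving channels_out
                z[i, j, :] = np.tensordot(window, kernel, axes=([0, 1], [0, 1]))
        return z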
<hkaiser>
k
<bita>
thanks
<hkaiser>
the conv1d in test2_lazy.py requires the shape to be provided eagerly
<hkaiser>
bita: ^^
<hkaiser>
we would have to move all of this into the conv1d_eager function in order for it to be lazy
<bita>
I think it is possible to do that
<hkaiser>
but that's again a matter of having batches available
<hkaiser>
the image = x[i,:,:] would have to be rewritten to work with this
<bita>
how is that a problem? batches are available when conv1d is called
<hkaiser>
I meant 'batches' in the sense of us 'faking' higher dimensions
<bita>
Okay, got it
<hkaiser>
x is just a list of 2d images, right?
<bita>
I was going to say the same thing: whether we move it or not, we need append and stack to support 4d and 5d
<hkaiser>
:D
<hkaiser>
I knew you'd say that
<bita>
yeah, x can be seen as a list of 2d images; here it is actually batches, a vector, and channels (these are not images, they are sequences, like a signal)
<bita>
:D
<hkaiser>
nod
<hkaiser>
so it's actually a 2d array of 1d data
<bita>
exactly
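To illustrate that layout (the shapes here are assumptions for the example): a 3d input is a 2d (batch by channels) arrangement of 1d sequences:

    import numpy as np

    x = np.zeros((4, 100, 3))   # batch=4, length=100, channels=3
    signal = x[0, :, 2]         # one 1d sequence: sample 0, channel 2
    print(signal.shape)         # (100,)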
<hkaiser>
bita: I can add support to variable() that extracts the shape from its initial value, if that's a literal value
<hkaiser>
and force evaluation otherwise
<hkaiser>
not sure however if that's what we want
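A rough sketch of that idea; Variable is a hypothetical stand-in for Phylanx's variable(), not the actual frontend class, and eval() stands in for forcing evaluation:

    import numpy as np

    class Variable:
        def __init__(self, initial_value):
            self._value = initial_value
            if isinstance(initial_value, np.ndarray):
                # literal initial value: the shape is known without evaluation
                self._shape = initial_value.shape
            else:
                # lazy expression: shape unknown until evaluated
                self._shape = None

        @property
        def shape(self):
            if self._shape is None:
                # force evaluation to learn the shape (the cost in question)
                self._value = self._value.eval()
                self._shape = self._value.shape
            return self._shape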
<bita>
in this case, if we move everything to the eager side, I guess it will work without extra evaluation
<hkaiser>
in principle yes; the question is whether we support all of the required functionality
<bita>
uhum
<hkaiser>
bita: especially the image = x[i,:,:] would have to be rewritten, I think
<bita>
got it
<hkaiser>
also, remember our append() is not an in-place operation
<hkaiser>
you need to write x = append(x, ...)
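In numpy terms (np.append behaves the same way on this point), the difference looks like this:

    import numpy as np

    x = np.array([1, 2, 3])
    np.append(x, 4)         # returns a new array; x itself is unchanged
    x = np.append(x, 4)     # correct: rebind x to the returned array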
<bita>
if I had the result_length I could have rewritten it as: Z = np.zeros((batch, result_length, channels_out))
<bita>
so we can write them with append and stack, or maybe with a new slicing operator
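Both formulations sketched in numpy; compute_one is a hypothetical stand-in for the per-image conv1d result, and the shapes are illustrative:

    import numpy as np

    batch, result_length, channels_out = 4, 98, 8

    def compute_one(i):
        # stand-in for the conv1d result of image i
        return np.full((result_length, channels_out), float(i))

    # variant 1: preallocate with zeros and fill via slice assignment
    Z = np.zeros((batch, result_length, channels_out))
    for i in range(batch):
        Z[i, :, :] = compute_one(i)

    # variant 2: build up the per-image results and stack them
    # (with Phylanx's non-in-place append this would be parts = append(parts, ...))
    parts = [compute_one(i) for i in range(batch)]
    Z = np.stack(parts)     # (batch, result_length, channels_out)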
<hkaiser>
k
<hkaiser>
so what would you like me to do?
<bita>
it makes no difference to me; please write it however you see fit
<hkaiser>
bita: write what?
<bita>
you said image = x[i,:,:] should be rewritten. I thought it should be rewritten in the variable part of the code, or supported differently. How can I rewrite it?
bita_ has joined #ste||ar
<hkaiser>
bita_: let me think
<bita_>
sure, thanks
bita has quit [Quit: Leaving]
<hkaiser>
bita_: should we start by deciding how to represent the data we're dealing with?
<hkaiser>
bita_: in this case the 3d data is actually a 2d array of vectors; in other cases 3d data could be a 1d array of images, etc.
<hkaiser>
so should we always represent 3d data using our tensor implementation, regardless of its conceptual structure, or should we represent the conceptual structure directly?
<hkaiser>
I have the impression that, because of numpy's flexibility, people do not care about the conceptual structure; it's just 1, 2, 3, 4, or 5d 'stuff'
<hkaiser>
it's only algorithms like conv or pool that impose a certain meaning on the dimensions of the data
<hkaiser>
other functions, like parse_shape_or_val, are agnostic
<hkaiser>
bita_: I just realized that you already started to implement conv1d on the eager side of things: np_conv1d
<bita_>
hkaiser, 3D data is always a 2D array of vectors (4D is a 2D array of images, which are always matrices)
<hkaiser>
ahh ok - good to know
<hkaiser>
that may simplify things
<bita_>
>> other functions, like parse_shape_or_val, are agnostic - that is true; only when we have training do we add two dimensions