# How to implement element-wise 1D interpolation in Tensorflow?

## Issue

I would like to apply 1D interpolation to each element of a tensor in TensorFlow.

For example, if it were a NumPy matrix, we could use `interp1d`:

```python
import numpy as np
from scipy.interpolate import interp1d

q = np.array([[2, 3], [5, 6]])   # query
x = [1, 3, 5, 7, 9]              # profile x
y = [3, 4, 5, 6, 7]              # profile y
fn = interp1d(x, y)
# fn(q) == [[3.5, 4.], [5., 5.5]]
```

If we have a tensor `q`,

```python
q = tf.placeholder(shape=[2, 2], dtype=tf.float32)
```

how can I perform the equivalent element-wise 1D interpolation?
Could anyone help?

## Solution

I am using a `tf.py_func` wrapper for this:

```python
import numpy as np
import tensorflow as tf
from scipy.interpolate import interp1d

x = [1, 3, 5, 7, 9]
y = [3, 4, 5, 6, 7]
intFn = interp1d(x, y)

def fn(m):
    # interp1d returns float64; cast back to the tensor's dtype
    return intFn(m).astype(np.float32)

q = tf.placeholder(shape=[2, 2], dtype=tf.float32)
q1 = np.array([[2, 3], [5, 6]]).astype(np.float32)

# Wrap the scipy call as a graph op
f1 = tf.py_func(fn, [q], tf.float32)

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    result = sess.run(f1, feed_dict={q: q1})

print(result)
```

Not the best solution, since `py_func` drops out of the graph and back into Python on every call. Hoping that TensorFlow will implement more of the NumPy and SciPy functionality natively …
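A loop-free alternative is to express the interpolation directly with searchsorted/gather-style ops, which TensorFlow also provides (`tf.searchsorted`, `tf.gather`). Here is a minimal NumPy sketch of that logic (`lerp_lookup` is an illustrative name, not a library function); each step maps onto a TF op:

```python
import numpy as np

def lerp_lookup(q, x, y):
    """Element-wise linear interpolation of q against profile (x, y).

    Uses searchsorted + fancy indexing, which correspond to
    tf.searchsorted and tf.gather, so the logic ports to a graph.
    """
    x = np.asarray(x, dtype=np.float32)
    y = np.asarray(y, dtype=np.float32)
    # Index of the right-hand neighbour for every query point,
    # clipped so out-of-range queries extrapolate from the edge segment.
    hi = np.clip(np.searchsorted(x, q), 1, len(x) - 1)
    lo = hi - 1
    # Linear blend between the two neighbours.
    w = (q - x[lo]) / (x[hi] - x[lo])
    return y[lo] + w * (y[hi] - y[lo])

q = np.array([[2.0, 3.0], [5.0, 6.0]])
x = [1, 3, 5, 7, 9]
y = [3, 4, 5, 6, 7]
print(lerp_lookup(q, x, y))   # matches interp1d: [[3.5 4. ], [5.  5.5]]
```

This works on the whole query tensor at once, with no per-element Python loop.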

## EDIT:

I have written a simple TensorFlow function that might be useful. Unfortunately, it will only interpolate one value at a time. However, if it is interesting, this might be something that can be improved upon …

```python
def interpolate(dx_T, dy_T, x, name='interpolate'):

    with tf.variable_scope(name):

        with tf.variable_scope('neighbors'):
            # Index of the first knot to the right of x, and its left neighbour
            delVals = dx_T - x
            ind_1 = tf.argmax(tf.sign(delVals))
            ind_0 = ind_1 - 1

        with tf.variable_scope('calculation'):
            # tf.cond needs scalar predicates, hence x[0] and dx_T[0]
            value = tf.cond(
                x[0] <= dx_T[0],          # below the profile: clamp to first y
                lambda: dy_T[:1],
                lambda: tf.cond(
                    x[0] >= dx_T[-1],     # above the profile: clamp to last y
                    lambda: dy_T[-1:],
                    lambda: (dy_T[ind_0] +                 # linear blend
                             (dy_T[ind_1] - dy_T[ind_0])
                             * (x - dx_T[ind_0])
                             / (dx_T[ind_1] - dx_T[ind_0]))
                ))

        # Multiply by 1 just to give the output a stable name
        result = tf.multiply(value, 1, name='y')

    return result
```
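For reference, the branch structure above — clamp below the first knot, clamp above the last, otherwise linearly blend the two neighbours — can be sketched in plain NumPy (a hypothetical mirror for checking the graph, not part of it):

```python
import numpy as np

def interp_scalar(dx, dy, x):
    """Mirror of interpolate(): clamp outside [dx[0], dx[-1]], else lerp."""
    dx = np.asarray(dx, dtype=np.float32)
    dy = np.asarray(dy, dtype=np.float32)
    if x <= dx[0]:              # below the profile: first value
        return dy[0]
    if x >= dx[-1]:             # above the profile: last value
        return dy[-1]
    i1 = int(np.argmax(np.sign(dx - x)))   # first knot to the right of x
    i0 = i1 - 1
    return dy[i0] + (dy[i1] - dy[i0]) * (x - dx[i0]) / (dx[i1] - dx[i0])

print(interp_scalar([1, 3, 5, 7, 9], [3, 4, 5, 6, 7], 2.0))    # 3.5
print(interp_scalar([1, 3, 5, 7, 9], [3, 4, 5, 6, 7], 10.0))   # 7.0 (clamped)
```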

This builds the interpolation as a small graph, given the profile tensors. Here is an example implementation. First create the graph …

```python
tf.reset_default_graph()
with tf.variable_scope('inputs'):
    dx_T = tf.placeholder(dtype=tf.float32, shape=(None,), name='dx')
    dy_T = tf.placeholder(dtype=tf.float32, shape=(None,), name='dy')
    x_T = tf.placeholder(dtype=tf.float32, shape=(1,), name='inpValue')

y_T = interpolate(dx_T, dy_T, x_T, name='interpolate')
init = tf.global_variables_initializer()
```

Now you can use it like so:

```python
x = [1, 3, 5, 7, 9]              # profile x
y = [3, 4, 5, 6, 7]              # profile y
q = np.array([[2, 3], [5, 6]])

with tf.Session() as sess:
    sess.run(init)

    # One sess.run per query value: the graph interpolates a single scalar
    for i in q.flatten():
        result = sess.run(y_T,
                          feed_dict={
                              'inputs/dx:0': x,
                              'inputs/dy:0': y,
                              'inputs/inpValue:0': np.array([i])
                          })

        print('{:6.3f} -> {}'.format(i, result))
```

And you will get the desired result … 
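As a quick sanity check on those results: `np.interp` clamps at the endpoints exactly like the `tf.cond` branches above, so it reproduces the same outputs without a session:

```python
import numpy as np

x = [1, 3, 5, 7, 9]              # profile x
y = [3, 4, 5, 6, 7]              # profile y
q = np.array([[2, 3], [5, 6]])

# Interior queries match the graph's linear blend …
print(np.interp(q.flatten(), x, y))                    # [3.5 4.  5.  5.5]
# … and out-of-range queries match its clamping branches.
print(np.interp(0.0, x, y), np.interp(10.0, x, y))     # 3.0 7.0
```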