ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'


Problem description

I am trying to convert my tensorflow model (2.0) into tensorflow lite format. My model has two input layers as follows:

import tensorflow as tf
from tensorflow.keras.layers import Lambda, Input, add
from tensorflow.keras.models import Model

r1 = Input(shape=[None, 1, 512], name='flatbuffer_data')  # I want to take a variable number of
# 512-float embeddings from my flatbuffer: if the flatbuffer has 4 embeddings it would be
# inferred as shape=[4, 1, 512]; if it has 100 embeddings, then it is [100, 1, 512].
r2 = Input(shape=[1, 512], name='query_embedding')

#Example code

minus_r1 = Lambda(lambda x: -x, name='invert_value')(r1)
subtracted = add([r2, minus_r1], name='embeddings_diff')

out1 = tf.argsort(subtracted)
out2 = tf.sort(subtracted)

model = Model([r1, r2], [out1, out2])

I am then doing some tensor operations on the layers and saving the model as follows (there is no training and hence no trainable parameters, just some linear algebra ops which I want to port to Android):

model.save('combined_model.h5')

I get my tensorflow .h5 model, but when I then try to convert it to tensorflow lite, I get the following error:

import tensorflow as tf
model = tf.keras.models.load_model('combined_model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

#Error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/aspiring1/.virtualenvs/faiss/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 446, in convert
    "invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'.

I know that we had dynamic and static shape inference in tensorflow 1.x using tensorflow placeholders. Is there an analogue in tensorflow 2.x? I'd also appreciate a solution in tensorflow 1.x.
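For reference, this is the kind of thing I mean in 1.x (an illustrative sketch, not my actual code):

import tensorflow as tf

# TF 1.x style: the leading dimension is left dynamic via a placeholder.
tf.compat.v1.disable_eager_execution()
flatbuffer_data = tf.compat.v1.placeholder(
    tf.float32, shape=[None, 1, 512], name='flatbuffer_data')
print(flatbuffer_data.shape)  # (None, 1, 512) -- the first axis is only fixed at run time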

Some answers and blogs I've read that might help: Tensorflow: how to save/restore a model?

Understanding dynamic and static shapes in tensorflow

Understanding tensorflow shapes

Using the first link above I also tried creating a tensorflow 1.x graph and tried saving it using the saved model format, but I don't get the desired results.

You can find my code for the same here: tensorflow 1.x gist code

Answer

Full code: https://drive.google.com/file/d/1MN4-FX_-hz3y-UAuf7OTj_XYuVTlsSTP/view?usp=sharing

I know that we had dynamic and static shape inference in tensorflow 1.x using tensorflow placeholders. Is there an analogue here in tensorflow 2.x?

That all still works fine. I think the problem is that tf.lite doesn't handle dynamic shapes. I think it preallocates all its tensors once and reuses them (I could be wrong).
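For example, a minimal sketch (not the question's model, just an illustration): the same kind of ops trace and run fine in plain TF 2.x with a dynamic first axis; it's only the lite conversion that complains:

import tensorflow as tf

# A concrete function with a dynamic first axis still runs fine in TF 2.x.
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 1, 512], dtype=tf.float32),
                              tf.TensorSpec(shape=[1, 512], dtype=tf.float32)])
def diff(r1, r2):
    return tf.sort(r2 - r1)

print(diff(tf.zeros([4, 1, 512]), tf.ones([1, 512])).shape)    # (4, 1, 512)
print(diff(tf.zeros([100, 1, 512]), tf.ones([1, 512])).shape)  # (100, 1, 512)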

So, first of all that extra dimension:

[None, None, 1, 512]

keras.Input always includes a batch dimension, which tf.lite can handle being unknown (this restriction seems relaxed in tf-nightly).
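You can see where that extra leading None comes from (a quick sketch using the same Input as in the question):

import tensorflow as tf

r1 = tf.keras.Input(shape=[None, 1, 512], name='flatbuffer_data')
print(r1.shape)  # (None, None, 1, 512): Keras prepends the batch axis to the shape you pass in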

But lite seems to prefer a batch dimension of 1. If you switch to:

r1 = Input(shape=[4], batch_size=None, name='flatbuffer_data')
r2 = Input(shape=[4], batch_size=1, name='query_embedding')

That passes the conversion, but it still fails when you try to execute the tflite model, because the model wants all unknown dimensions to be 1:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()
i.get_input_details()

i.set_tensor(0, tf.constant([[0.,0,0,0],[1,1,1,1],[2,2,2,2]]))
i.set_tensor(1, tf.constant([[0.,0,0,0]]))

ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 1 for dimension 0 of input 0.
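As an aside, depending on your TensorFlow version, the Python tf.lite.Interpreter also exposes resize_tensor_input, which may let you resize an input before allocating tensors. I haven't verified that it works for this particular graph; an untested sketch:

# Untested: resize the first input (assumed to be 'flatbuffer_data', as above) to 3 rows.
i = tf.lite.Interpreter(model_content=tflite_model)
input_details = i.get_input_details()
i.resize_tensor_input(input_details[0]['index'], [3, 4])
i.allocate_tensors()
i.set_tensor(input_details[0]['index'], tf.constant([[0., 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]))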

With tf-nightly you can convert the model as you've written it, but that also fails to run, since the unknown dimension is assumed to be 1:

r1 = Input(shape=[None, 4], name='flatbuffer_data') 
r2 = Input(shape=[1, 4], name='query_embedding')

...

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()
print(i.get_input_details())

i.set_tensor(0, tf.constant([[[0.,0,0,0],[1,1,1,1],[2,2,2,2]]]))
i.set_tensor(1, tf.constant([[[0.,0,0,0]]]))

ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 1 for dimension 1 of input 0.

Solution? No. Almost.

I think you need to give that array a size larger than you expect it to be, and pass an int telling your model how many elements to slice out:

n = Input(shape=(), dtype=tf.int32, name='num_inputs')  # how many rows of r1 hold real data
r1 = Input(shape=[1000, 4], name='flatbuffer_data')     # fixed-size buffer, padded by the caller
r2 = Input(shape=[4], name='query_embedding')

#Example code
x = tf.reshape(r1, [1000, 4])              # collapse the (batch, 1000, 4) input to (1000, 4)
x = tf.gather(x, tf.range(tf.squeeze(n)))  # keep only the first n rows
minus_r1 = Lambda(lambda x: -x, name='invert_value')(x)
subtracted = add([r2, minus_r1], name='embeddings_diff')

out1 = tf.argsort(subtracted, name='argsort')
out2 = tf.sort(subtracted, name="sorted")

model = Model([r1, r2, n], [out1, out2])

Then it works:

import numpy as np

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

i = tf.lite.Interpreter(model_content=tflite_model)
i.allocate_tensors()

for d in i.get_input_details():
  print(d)

a = np.zeros([1000, 4], dtype=np.float32)
a[:3] = [
          [0.,0,0,0],
          [1,1,1,1],
          [2,2,2,2]]

i.set_tensor(0, tf.constant(a[np.newaxis,...], dtype=tf.float32))
i.set_tensor(1, tf.constant([[0.,0,0,0]]))
i.set_tensor(2, tf.constant([3], dtype=tf.int32))

i.invoke()

print()
for d in i.get_output_details():
  print(i.get_tensor(d['index']))

[[ 0.  0.  0.  0.]
 [-1. -1. -1. -1.]
 [-2. -2. -2. -2.]]
[[0 1 2 3]
 [0 1 2 3]
 [0 1 2 3]]

The OP tried this in a Java interpreter and got:

java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.

So we're not done yet.
