Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes


Problem description


I am trying to run a pre-trained Object Detection TensorFlow Lite model from the TensorFlow detection model zoo. I used the ssd_mobilenet_v3_small_coco model from that site, under the Mobile Models heading. Following the instructions under "Running our model on Android", I commented out the model download script in the build.gradle file to avoid the assets being overwritten: // apply from:'download_model.gradle', and replaced the detect.tflite and labelmap.txt files in the assets directory. The build succeeded without errors and the app was installed on my Android device, but it crashed as soon as it launched, and logcat showed:

E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 16960
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes.
    at org.tensorflow.lite.Tensor.throwIfShapeIsIncompatible(Tensor.java:425)
    at org.tensorflow.lite.Tensor.throwIfDataIsIncompatible(Tensor.java:392)
    at org.tensorflow.lite.Tensor.setTo(Tensor.java:188)
    at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:150)
    at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:314)
    at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:196)
    at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:185)
    at android.os.Handler.handleCallback(Handler.java:873)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:201)
    at android.os.HandlerThread.run(HandlerThread.java:65)
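The two byte counts in the exception pinpoint the mismatch. Assuming a quantized (uint8, one byte per channel) model, 307200 = 320 × 320 × 3, which suggests the ssd_mobilenet_v3_small_coco model expects a 320 × 320 RGB input, while 270000 = 300 × 300 × 3, the 300 × 300 frames the example app feeds it by default. A minimal sketch of that arithmetic:

```java
public class BufferSizeCheck {
    // Bytes needed for one quantized (1 byte/channel) image tensor: H * W * channels.
    static int imageBufferBytes(int height, int width, int channels) {
        return height * width * channels;
    }

    public static void main(String[] args) {
        // Size of the model's input tensor (320x320 RGB, uint8).
        System.out.println(imageBufferBytes(320, 320, 3)); // 307200
        // Size of the buffer the app actually sends (300x300 RGB, uint8).
        System.out.println(imageBufferBytes(300, 300, 3)); // 270000
    }
}
```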

I have searched through much of the TensorFlow Lite documentation but found nothing related to this error. I also found some Stack Overflow questions with the same error message, but those were about custom-trained models, so they did not help; the same error keeps appearing even with a custom-trained model. What should I do to eliminate this error?

Solution

You should resize your input tensors so that your model can accept data of the size you feed it, in both pixel dimensions and batch size.

The code below is for image classification, while yours is object detection: TFLiteObjectDetectionAPIModel is responsible for the input size, so try adjusting the size somewhere inside TFLiteObjectDetectionAPIModel.

The labels length needs to match the output tensor length of your trained model.

  int[] dimensions = new int[4];
  dimensions[0] = 1;   // batch size (number of frames at a time)
  dimensions[1] = 224; // image width required by the model
  dimensions[2] = 224; // image height required by the model
  dimensions[3] = 3;   // number of color channels (RGB)
  Tensor tensor = c.tfLite.getInputTensor(0);  // input shape before resizing
  c.tfLite.resizeInput(0, dimensions);         // declare the new input shape
  Tensor tensor1 = c.tfLite.getInputTensor(0); // input shape after resizing

Change input size
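Whichever input size you settle on, the ByteBuffer you pass to the interpreter must be allocated to match the input tensor exactly, or you will hit this same exception. A minimal sketch, assuming a quantized 320 × 320 RGB model (use 4 bytes per channel instead of 1 for a float32 model):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferAllocator {
    // Allocate a direct buffer sized for one quantized (uint8) RGB frame.
    static ByteBuffer allocateInputBuffer(int inputSize) {
        int batch = 1, channels = 3, bytesPerChannel = 1; // 4 for float32 models
        ByteBuffer buffer = ByteBuffer.allocateDirect(
                batch * inputSize * inputSize * channels * bytesPerChannel);
        buffer.order(ByteOrder.nativeOrder()); // TFLite expects native byte order
        return buffer;
    }

    public static void main(String[] args) {
        System.out.println(allocateInputBuffer(320).capacity()); // 307200
        System.out.println(allocateInputBuffer(300).capacity()); // 270000
    }
}
```

In the TFLite example app, the buffer size is driven by a constant in DetectorActivity (300 by default), so raising that value to the model's expected size is the usual fix rather than allocating the buffer by hand.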

