Low Accuracy with static image on TFLite demo model
Problem description
I'm trying the TFLite image-classification implementation with the MobileNet transfer-learning example from TensorFlow for Poets 2.
I'm able to successfully complete the transfer learning using the four flower samples in the code lab and got the screen below.
This is a continuous stream of images that's being classified.
I need to classify the image after taking the picture instead of stream and then take some action based on the result. Below is my approach for this.
- Create a basic camera app
- Take a picture and save it to storage
- The URI of the image is saved, and a drawable is created from that URI.
- This drawable is then converted to a bitmap.
- The bitmap is resized to 224 x 224 to match the input of the MobileNet model
- I'm receiving confidence scores in the 0.05–0.06 range, whereas the continuous-stream sample from the code lab gives scores in the 0.80–0.90 range for the trained flower classes
Below is the code where I transform the bitmap to 224 x 224 size
private static Bitmap getResizedBitmap(Bitmap bm, int newWidth, int newHeight, boolean isNecessaryToKeepOrig) {
    int width = bm.getWidth();
    int height = bm.getHeight();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    // Create a matrix for the scaling manipulation
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    // Recreate the bitmap at the new size
    Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, false);
    if (!isNecessaryToKeepOrig) {
        bm.recycle();
    }
    return resizedBitmap;
}
The results turn out to be the same even when I pass the original bitmap to the classifier, which itself converts the image to 224 x 224. Should I be doing some additional processing on the images, or do I need to change any configuration in the model?
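For completeness, my understanding is that a float MobileNet model expects each RGB channel normalized to roughly [-1, 1] before inference (a quantized model takes the raw bytes instead). A minimal sketch of that per-pixel conversion in plain Java; the 127.5 mean/std constants are the usual MobileNet defaults, not something taken from my code:

```java
public class MobileNetPreprocess {
    static final float MEAN = 127.5f;
    static final float STD = 127.5f;

    // Convert one ARGB pixel (as returned by Bitmap.getPixels) into the
    // three normalized floats a float MobileNet model expects.
    static float[] normalize(int argb) {
        float r = (argb >> 16) & 0xFF;
        float g = (argb >> 8) & 0xFF;
        float b = argb & 0xFF;
        return new float[] {
            (r - MEAN) / STD,
            (g - MEAN) / STD,
            (b - MEAN) / STD
        };
    }

    public static void main(String[] args) {
        // White maps to 1.0 in every channel, black to -1.0.
        float[] white = normalize(0xFFFFFFFF);
        float[] black = normalize(0xFF000000);
        System.out.printf("white: %.2f, black: %.2f%n", white[0], black[0]);
    }
}
```

If the model were quantized, this normalization step would be skipped and the raw channel bytes fed directly into the input buffer.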
I think the problem is applyFilter(), which smooths the probabilities across frames. Just remove it and the probability should be shown as normal.
String classifyFrame(Bitmap bitmap) {
    ...
    // smooth the results
    // applyFilter(); <-- remove it
    ...
}