pytesseract not recognizing text as expected?


Problem description


I am trying to run a simple license plate image through opencv and pytesseract to get the text but I am having trouble getting anything out of it. Following the tutorial here:

https://circuitdigest.com/microcontroller-projects/license-plate-recognition-using-raspberry-pi-and-opencv

I'm running on a macbook with everything installed in anaconda and no errors as far as I see, but when I run my code I get the cropped image but no detected number:

(computer_vision) mac@x86_64-apple-darwin13 lpr % python explore.py
Detected Number is: 

The code is below:

import cv2
import numpy as np
import imutils
import pytesseract

img = cv2.imread('plate1.jpg')
img = cv2.resize(img, (620,480))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to grey scale
gray = cv2.bilateralFilter(gray, 11, 17, 17)
edged = cv2.Canny(gray, 30, 200) #Perform Edge detection

cnts = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:10]
screenCnt = None

# loop over our contours
for c in cnts:
    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.018 * peri, True)
    # if our approximated contour has four points, then
    # we can assume that we have found our screen
    if len(approx) == 4:
        screenCnt = approx
        break

# Mask out everything except the number plate
mask = np.zeros(gray.shape, np.uint8)
new_image = cv2.drawContours(mask, [screenCnt], 0, 255, -1)
new_image = cv2.bitwise_and(img, img, mask=mask)

# Now crop
(x, y) = np.where(mask == 255)
(topx, topy) = (np.min(x), np.min(y))
(bottomx, bottomy) = (np.max(x), np.max(y))
Cropped = gray[topx:bottomx+1, topy:bottomy+1]

#Read the number plate
text = pytesseract.image_to_string(Cropped, config='--psm 11')
print("Detected Number is:",text)


cv2.imshow('image',Cropped)
cv2.waitKey(0)
cv2.destroyAllWindows()

Base image is here:

Solution

You may try a different psm configuration:

text = pytesseract.image_to_string(Cropped, config='--psm 3')

Output is: Detected Number is: PHR. 26.BR 9044;.
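
Preprocessing the crop can also help: Tesseract generally does better on a clean black-and-white image than on raw grayscale. Below is a minimal sketch using a plain NumPy global threshold for illustration (in this OpenCV pipeline, `cv2.threshold` with `cv2.THRESH_OTSU` would be the more usual choice):

```python
import numpy as np

def binarize(gray, thresh=None):
    """Global threshold to black and white; defaults to the mean gray level."""
    if thresh is None:
        thresh = gray.mean()
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

# e.g. with the question's crop:
# text = pytesseract.image_to_string(binarize(Cropped), config='--psm 3')
```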

Tesseract manual page:

0 = Orientation and script detection (OSD) only.
1 = Automatic page segmentation with OSD.
2 = Automatic page segmentation, but no OSD, or OCR. (not implemented)
3 = Fully automatic page segmentation, but no OSD. (Default)
4 = Assume a single column of text of variable sizes.
5 = Assume a single uniform block of vertically aligned text.
6 = Assume a single uniform block of text.
7 = Treat the image as a single text line.
8 = Treat the image as a single word.
9 = Treat the image as a single word in a circle.
10 = Treat the image as a single character.
11 = Sparse text. Find as much text as possible in no particular order.
12 = Sparse text with OSD.
13 = Raw line. Treat the image as a single text line,
     bypassing hacks that are Tesseract-specific.

I don't know why `--psm 11` is not giving any output...
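
Since the best mode depends on the image, a quick way to pick one is to loop over candidate modes and compare the results. A small sketch, assuming the question's `Cropped` image is available (`pytesseract` is imported inside the OCR helper so the config builder is usable on its own):

```python
def psm_config(psm):
    """Build the config string Tesseract expects, e.g. '--psm 3'."""
    return f'--psm {psm}'

def try_psm_modes(image, modes=(3, 6, 7, 8, 13)):
    """Return {psm: recognized text} for each mode (requires tesseract installed)."""
    import pytesseract
    return {psm: pytesseract.image_to_string(image, config=psm_config(psm))
            for psm in modes}

# Usage with the question's crop:
# for psm, text in try_psm_modes(Cropped).items():
#     print(f'psm {psm}: {text!r}')
```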

It looks like psm 11 was not yet published (or it is too new for your installed build). From the tesseract-ocr source:

void PrintHelpForPSM() {
  const char* msg =
      "Page segmentation modes:\n"
        "  0    Orientation and script detection (OSD) only.\n"
        "  1    Automatic page segmentation with OSD.\n"
        "  2    Automatic page segmentation, but no OSD, or OCR.\n"
        "  3    Fully automatic page segmentation, but no OSD. (Default)\n"
        "  4    Assume a single column of text of variable sizes.\n"
        "  5    Assume a single uniform block of vertically aligned text.\n"
        "  6    Assume a single uniform block of text.\n"
        "  7    Treat the image as a single text line.\n"
        "  8    Treat the image as a single word.\n"
        "  9    Treat the image as a single word in a circle.\n"
        " 10    Treat the image as a single character.\n"

        //TODO: Consider publishing these modes.
        #if 0
        " 11    Sparse text. Find as much text as possible in no"
          " particular order.\n"
        " 12    Sparse text with OSD.\n"
        " 13    Raw line. Treat the image as a single text line,\n"
          "\t\t\tbypassing hacks that are Tesseract-specific.\n"
        #endif
        ;
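
The quoted source guards modes 11–13 behind `#if 0`, so whether your installed binary supports them depends on its version. A sketch that asks the installed binary which modes it actually supports, assuming `tesseract` is on your PATH and its build provides the `--help-psm` flag (available in Tesseract 4.x):

```python
import re
import subprocess

def parse_psm_modes(help_text):
    """Extract the numeric psm modes listed in tesseract's --help-psm output."""
    return [int(m) for m in re.findall(r'^\s*(\d+)\s', help_text, re.MULTILINE)]

def supported_psm_modes():
    """Query the installed tesseract binary (assumes it is on PATH)."""
    out = subprocess.run(['tesseract', '--help-psm'],
                         capture_output=True, text=True)
    # older builds print help text to stderr instead of stdout
    return parse_psm_modes(out.stdout or out.stderr)
```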
