Creating TfRecords from a list of strings and feeding a Graph in tensorflow after decoding


Problem description

The aim was to create a database of TfRecords. Given: I have 23 folders, each containing 7500 images, and 23 text files, each with 7500 lines describing the features of the 7500 images in the corresponding folder.

I created the database with the following code:

import tensorflow as tf
import numpy as np
from PIL import Image

def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def create_image_annotation_data():
    # Code to read images and features.
    # images is a list of numpy arrays of images, and features_labels is a list
    # of strings, where each string represents the whole set of features for one image.
    return images, features_labels

# This is the starting point of the program.
# Now I have the images stored as list of numpy array, and the features as list of strings.
images, annotations = create_image_annotation_data()

tfrecords_filename = "database.tfrecords"
writer = tf.python_io.TFRecordWriter(tfrecords_filename)

for img, ann in zip(images, annotations):

    # Note that the height and width are needed to reconstruct the original image.
    height = img.shape[0]
    width = img.shape[1]

    # This is how data is converted into binary
    img_raw = img.tostring()
    example = tf.train.Example(features=tf.train.Features(feature={
        'height': _int64_feature(height),
        'width': _int64_feature(width),
        'image_raw': _bytes_feature(img_raw),
        'annotation_raw': _bytes_feature(tf.compat.as_bytes(ann))
    }))

    writer.write(example.SerializeToString())

writer.close()

# Read the records back to verify that images and annotations can be reconstructed.
reconstructed_images = []

record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename)

for string_record in record_iterator:
    example = tf.train.Example()
    example.ParseFromString(string_record)

    height = int(example.features.feature['height']
                 .int64_list
                 .value[0])

    width = int(example.features.feature['width']
                .int64_list
                .value[0])

    img_string = (example.features.feature['image_raw']
                  .bytes_list
                  .value[0])

    annotation_string = (example.features.feature['annotation_raw']
                         .bytes_list
                         .value[0])

    img_1d = np.frombuffer(img_string, dtype=np.uint8)
    reconstructed_img = img_1d.reshape((height, width, -1))
    annotation_reconstructed = annotation_string.decode('utf-8')
    reconstructed_images.append((reconstructed_img, annotation_reconstructed))
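
For reference, here is a minimal sketch of what create_image_annotation_data() could look like (np and Image come from the imports above). The directory layout and file names used here (a root folder containing the 23 image folders, each with an annotations.txt) are hypothetical and would need to be adapted to the real dataset:

import os

def create_image_annotation_data(root_dir='data'):
    # Sketch only: assumes each sub-folder of root_dir holds the images plus an
    # annotations.txt whose i-th line describes the i-th image (sorted by name).
    images, features_labels = [], []
    for folder in sorted(os.listdir(root_dir)):
        folder_path = os.path.join(root_dir, folder)
        if not os.path.isdir(folder_path):
            continue
        with open(os.path.join(folder_path, 'annotations.txt')) as f:
            lines = [line.strip() for line in f]
        image_names = sorted(name for name in os.listdir(folder_path)
                             if name.lower().endswith(('.jpg', '.png')))
        for name, line in zip(image_names, lines):
            img = np.array(Image.open(os.path.join(folder_path, name)))
            images.append(img)
            features_labels.append(line)
    return images, features_labels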

Therefore, after converting the images and text into tfRecords, and after being able to read them back and convert the images into numpy arrays and the (binary) text into python strings, I tried to go the extra mile by using a filename_queue with a reader. (The purpose was to provide the graph with a batch of data rather than one piece of data at a time. Additionally, the aim was to enqueue and dequeue the queue of examples through different threads, thereby making training the network faster.)

Therefore, I used the following code:

import tensorflow as tf
import numpy as np
import time

image_file_list = ["database.tfrecords"]
batch_size = 16

# Make a queue of file names; here it holds the single TFRecord file.
filename_queue = tf.train.string_input_producer(image_file_list, num_epochs=1, shuffle=False)

reader = tf.TFRecordReader()

# Read a single serialized example from the queue; the first returned value in
# the tuple is the record key, which we are ignoring.
_, serialized_example = reader.read(filename_queue)

features = tf.parse_single_example(
      serialized_example,
      # Defaults are not specified since both keys are required.
      features={
          'height': tf.FixedLenFeature([], tf.int64),
          'width': tf.FixedLenFeature([], tf.int64),
          'image_raw': tf.FixedLenFeature([], tf.string),
          'annotation_raw': tf.FixedLenFeature([], tf.string)
      })

image = tf.decode_raw(features['image_raw'], tf.uint8)
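# Note: annotation_raw was written above as a UTF-8 string, so decode_raw on the
# next line reinterprets its bytes as float32 rather than parsing the numbers;
# the answer below keeps the annotation as a tf.string instead.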
annotation = tf.decode_raw(features['annotation_raw'], tf.float32)

height = tf.cast(features['height'], tf.int32)
width = tf.cast(features['width'], tf.int32)

image = tf.reshape(image, [height, width, 3])

# Note that min_after_dequeue is needed to make sure that the queue is not
# empty after dequeuing, so that we don't run into errors.
'''
min_after_dequeue = 100
capacity = min_after_dequeue + 3 * batch_size
ann, images_batch = tf.train.batch([annotation, image],
                                   shapes=[[1], [112, 112, 3]],
                                   batch_size=batch_size,
                                   capacity=capacity,
                                   num_threads=1)
'''

# Start a new session to show example output.
with tf.Session() as sess:
    merged = tf.summary.merge_all()
    train_writer = tf.summary.FileWriter('C:/Users/user/Documents/tensorboard_logs/New_Runs', sess.graph)

    # Required to get the filename matching to run.
    tf.global_variables_initializer().run()

    # Coordinate the loading of image files.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for steps in range(16):
        t1 = time.time()
        annotation_string, batch, summary = sess.run([annotation, image, merged])
        t2 = time.time()
        print('time to fetch 16 faces:', (t2 - t1))
        print(annotation_string)
        tf.summary.image("image_batch", image)
        train_writer.add_summary(summary, steps)

    # Finish off the filename queue coordinator.
    coord.request_stop()
    coord.join(threads)

Finally, after running the above code, I got the following error:

OutOfRangeError (see above for traceback): FIFOQueue '_0_input_producer' is closed and has insufficient elements (requested 1, current size 0)
[[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]]

Another question:

  1. How to decode the binary database (tfrecords) to retrieve back the features stored "as a python string data structure"?
  2. How to use tf.train.batch to create a batch of examples to feed the network?

Thank you!! Any help is much appreciated.

Answer

In order to solve this problem, the coordinator and the queue runner both had to be initialized within a Session. Additionally, since the number of epochs is controlled internally, it is not a global variable but a local variable. Therefore, we need to initialize that local variable before telling the queue_runner to start enqueuing the file_names into the Queue. Here is the code:

filename_queue = tf.train.string_input_producer([tfrecords_filename], num_epochs=num_epoch, shuffle=False, name='queue')
reader = tf.TFRecordReader()

key, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
    serialized_example,
    # Defaults are not specified since both keys are required.
    features={
        'height': tf.FixedLenFeature([], tf.int64),
        'width': tf.FixedLenFeature([], tf.int64),
        'image_raw': tf.FixedLenFeature([], tf.string),
        'annotation_raw': tf.FixedLenFeature([], tf.string)
    })
...
init_op = tf.group(tf.local_variables_initializer(),
                   tf.global_variables_initializer())
with tf.Session() as sess:
    sess.run(init_op)

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

And now it should work.
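
To make the stopping condition explicit, here is a minimal end-to-end sketch of the run loop; it catches the OutOfRangeError that is raised once num_epoch epochs of file_names have been consumed, which is the clean way to terminate:

init_op = tf.group(tf.local_variables_initializer(),
                   tf.global_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    try:
        while not coord.should_stop():
            # Evaluate one parsed example (or one batch, once tf.train.batch is used).
            img, ann = sess.run([image, annotation])
    except tf.errors.OutOfRangeError:
        # Raised once the input_producer has delivered num_epoch epochs.
        print('Done: the input queue is exhausted.')
    finally:
        coord.request_stop()
    coord.join(threads)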

Now, to gather a batch of images before feeding them into the network, we can use tf.train.shuffle_batch or tf.train.batch. Both work, and the difference is simple: one shuffles the images and the other does not. But note that defining a number of threads and using tf.train.batch might still shuffle the data samples, because of the race between the threads that are enqueuing file_names. Anyway, the following code should be inserted directly after initializing the Queue, as follows:

min_after_dequeue = 100
num_threads = 1
capacity = min_after_dequeue + num_threads * batch_size
label_batch, images_batch = tf.train.batch([annotation, image],
                                           shapes=[[], [112, 112, 3]],
                                           batch_size=batch_size,
                                           capacity=capacity,
                                           num_threads=num_threads)

Note that here the shapes of the tensors could be different. It happened that the reader was decoding a colored image of size [112, 112, 3], while the annotation had shape [] (there is no special reason for that; it was just this particular case).

Finally, we can treat the tf.string datatype as a string. In reality, after evaluating the annotation tensor, we can see that the tensor is treated as a binary string (this is how it is really treated in tensorflow). Therefore, in my case that string was just a set of features related to that particular image. In order to extract specific features, here is an example:

# The output of string_split is not a regular tensor; it is a SparseTensor.
# Its .values property holds the actual tokens as a dense 1-D tensor.
label_batch_splitted = tf.string_split(label_batch, delimiter=', ')
label_batch_values = tf.reshape(label_batch_splitted.values, [batch_size, -1])
# string_to_number will convert the features' numbers into float32, as I need them.
label_batch_numbers = tf.string_to_number(label_batch_values, out_type=tf.float32)
# tf.slice extracts the specific feature that I am looking for.
confidences = tf.slice(label_batch_numbers, begin=[0, 3], size=[-1, 1])
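
As a quick self-contained check of these string ops (with made-up feature strings), the following sketch prints the fourth number of each row:

demo_batch = tf.constant(['0.1, 0.2, 0.3, 0.9',
                          '0.5, 0.6, 0.7, 0.4'])
demo_splitted = tf.string_split(demo_batch, delimiter=', ')
demo_values = tf.reshape(demo_splitted.values, [2, -1])
demo_numbers = tf.string_to_number(demo_values, out_type=tf.float32)
demo_confidences = tf.slice(demo_numbers, begin=[0, 3], size=[-1, 1])

with tf.Session() as sess:
    print(sess.run(demo_confidences))  # -> [[0.9], [0.4]]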

Hope this answer helps.
