Converting Tensor to a SparseTensor for ctc_loss

Problem description

Is there a way to convert a dense tensor into a sparse tensor? Apparently, Tensorflow's Estimator.fit doesn't accept SparseTensors as labels. One reason I would like to pass SparseTensors into Tensorflow's Estimator.fit is to be able to use tensorflow ctc_loss. Here's the code:

import dataset_utils
import tensorflow as tf
import numpy as np

from tensorflow.contrib import grid_rnn, learn, layers, framework

def grid_rnn_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 48, 1596])
    # Build a SparseTensor from the dense labels by keeping only the non-zero entries
    indices = tf.where(tf.not_equal(labels, tf.constant(0, dtype=tf.int32)))
    values = tf.gather_nd(labels, indices)
    sparse_labels = tf.SparseTensor(indices, values, dense_shape=tf.shape(labels, out_type=tf.int64))

    cell_fw = grid_rnn.Grid2LSTMCell(num_units=128)
    cell_bw = grid_rnn.Grid2LSTMCell(num_units=128)
    bidirectional_grid_rnn = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, input_layer, dtype=tf.float32)
    outputs = tf.reshape(bidirectional_grid_rnn[0], [-1, 256])

    W = tf.Variable(tf.truncated_normal([256, 80], stddev=0.1, dtype=tf.float32), name='W')
    b = tf.Variable(tf.constant(0., dtype=tf.float32, shape=[80], name='b'))

    logits = tf.matmul(outputs, W) + b
    logits = tf.reshape(logits, [tf.shape(input_layer)[0], -1, 80])
    logits = tf.transpose(logits, (1, 0, 2))

    loss = None
    train_op = None

    if mode != learn.ModeKeys.INFER:
        #Error occurs here
        loss = tf.nn.ctc_loss(inputs=logits, labels=sparse_labels, sequence_length=320)

    ... # returning ModelFnOps

def main(_):
    image_paths, labels = dataset_utils.read_dataset_list('../test/dummy_labels_file.txt')
    data_dir = "../test/dummy_data/"
    images = dataset_utils.read_images(data_dir=data_dir, image_paths=image_paths, image_extension='png')
    print('Done reading images')
    images = dataset_utils.resize(images, (1596, 48))
    images = dataset_utils.transpose(images)
    labels = dataset_utils.encode(labels)
    x_train, x_test, y_train, y_test = dataset_utils.split(features=images, test_size=0.5, labels=labels)

    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.array(x_train)},
        y=np.array(y_train),
        num_epochs=1,
        shuffle=True,
        batch_size=1
    )

    classifier = learn.Estimator(model_fn=grid_rnn_fn, model_dir="/tmp/grid_rnn_ocr_model")
    classifier.fit(input_fn=train_input_fn)

UPDATE:

It turns out, this solution from here converts the dense tensor into a sparse one:

indices = tf.where(tf.not_equal(labels, tf.constant(0, dtype=tf.int32)))
values = tf.gather_nd(labels, indices)
sparse_labels = tf.SparseTensor(indices, values, dense_shape=tf.shape(labels, out_type=tf.int64))

However, I encounter this error now raised by ctc_loss:

ValueError: Shape must be rank 1 but is rank 0 for 'CTCLoss' (op: 'CTCLoss') with input shapes: [?,?,80], [?,2], [?], [].
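
For reference, the four shapes in the message correspond to the CTCLoss op's inputs: the logits ([?,?,80]), the label indices ([?,2]), the label values ([?]), and sequence_length ([]). The rank-0 input is the Python scalar sequence_length=320; tf.nn.ctc_loss expects a rank-1 vector with one length per batch element. A minimal sketch of that change, assuming every example in the batch really is 320 frames long:

# Hypothetical adjustment: pass one sequence length per batch element
batch_size = tf.shape(input_layer)[0]
seq_lens = tf.fill([batch_size], 320)
loss = tf.nn.ctc_loss(inputs=logits, labels=sparse_labels, sequence_length=seq_lens)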

I have this code that converts dense labels to sparse:

def convert_to_sparse(labels, dtype=np.int32):
    indices = []
    values = []

    # Record a (row, column) coordinate and the label value for every position
    for n, seq in enumerate(labels):
        indices.extend(zip([n] * len(seq), range(len(seq))))
        values.extend(seq)

    indices = np.asarray(indices, dtype=dtype)
    values = np.asarray(values, dtype=dtype)
    # Dense shape: (number of sequences, length of the longest sequence)
    shape = np.asarray([len(labels), np.asarray(indices).max(0)[1] + 1], dtype=dtype)

    return indices, values, shape
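
A quick check with made-up labels shows what this helper returns (the numbers below are only for illustration):

toy_labels = [[1, 2], [3]]  # two sequences, of length 2 and 1
indices, values, shape = convert_to_sparse(toy_labels)
print(indices)  # coordinates (0, 0), (0, 1), (1, 0)
print(values)   # 1, 2, 3
print(shape)    # [2 2] -> 2 sequences, longest length 2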

I converted y_train to sparse labels and placed the values inside a SparseTensor:

sparse_y_train = convert_to_sparse(y_train)
print(tf.SparseTensor(
    indices=sparse_y_train[0],
    values=sparse_y_train[1],
    dense_shape=sparse_y_train[2]
))

And compared it to the SparseTensor created inside the grid_rnn_fn:

indices = tf.where(tf.not_equal(labels, tf.constant(0, dtype=tf.int32)))
values = tf.gather_nd(labels, indices)
sparse_labels = tf.SparseTensor(indices, values, dense_shape=tf.shape(labels, out_type=tf.int64))

Here's what I got:

For sparse_y_train:

SparseTensor(indices=Tensor("SparseTensor/indices:0", shape=(33, 2), dtype=int64), values=Tensor("SparseTensor/values:0", shape=(33,), dtype=int32), dense_shape=Tensor("SparseTensor/dense_shape:0", shape=(2,), dtype=int64))

For sparse_labels:

SparseTensor(indices=Tensor("Where:0", shape=(?, 2), dtype=int64), values=Tensor("GatherNd:0", shape=(?,), dtype=int32), dense_shape=Tensor("Shape:0", shape=(2,), dtype=int64))

This leads me to think that ctc_loss can't handle SparseTensors with dynamic shapes as labels.

Accepted answer

Yes. It is possible to convert a tensor to a sparse tensor and back:

Let sparse be a sparse tensor and dense be a dense tensor.

From sparse to dense:

 dense = tf.sparse_to_dense(sparse.indices, sparse.shape, sparse.values)
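
Assuming TensorFlow 1.x (which the contrib imports above suggest), tf.sparse_tensor_to_dense does the same conversion in one call that takes the SparseTensor directly:

dense = tf.sparse_tensor_to_dense(sparse)  # missing entries are filled with default_value=0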

From dense to sparse:

zero = tf.constant(0, dtype=tf.float32)
where = tf.not_equal(dense, zero)
indices = tf.where(where)
values = tf.gather_nd(dense, indices)
sparse = tf.SparseTensor(indices, values, dense.shape)
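
Putting both directions together, here is a minimal round-trip sketch, assuming TensorFlow 1.x graph mode and a made-up 2x3 matrix:

import tensorflow as tf

dense_in = tf.constant([[1., 0., 2.],
                        [0., 3., 0.]])  # made-up input

zero = tf.constant(0, dtype=tf.float32)
indices = tf.where(tf.not_equal(dense_in, zero))  # coordinates of non-zero entries
values = tf.gather_nd(dense_in, indices)          # the non-zero values themselves
# tf.shape(...) gives the dynamic shape, so this also works when the static shape is unknown
sparse = tf.SparseTensor(indices, values, tf.shape(dense_in, out_type=tf.int64))

dense_out = tf.sparse_to_dense(sparse.indices, sparse.dense_shape, sparse.values)

with tf.Session() as sess:
    print(sess.run(dense_out))  # recovers the original matrix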
