Resize 3D data in tensorflow like tf.image.resize_images


Question

I need to resize some 3D data, like the tf.image.resize_images method does for 2D data.

I was thinking I could try running tf.image.resize_images on it in a loop, swapping axes, but I thought there must be an easier way. Simple nearest neighbour should be fine.

Any ideas? It's not ideal, but I could settle for the case where the data is just 0 or 1 and use something like:

tf.where(boolMap, tf.fill(data_im*2, 0), tf.fill(data_im*2, 1))

But I'm not sure how to get boolMap. Would using tf.while_loop to go over all the values dramatically decrease performance? I feel like it would on GPU, unless it has some kind of automatic loop parallelisation.

The data is a tensor with dimensions [batch_size, width, height, depth, 1].

Thanks in advance.

NB: the output dimensions should be:

[batch_size,width * scale,height * scale,depth * scale,1]
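For an integer scale factor, the nearest-neighbour behaviour asked for above can be sketched in plain NumPy by repeating each voxel along the three spatial axes (a hypothetical helper for illustration, not a TensorFlow op):

```python
import numpy as np

def nearest_resize3d(vol, scale):
    """Nearest-neighbour upscaling of a [batch, width, height, depth, channels]
    volume by an integer factor: each voxel becomes a scale^3 block.
    (Illustrative NumPy sketch, not TensorFlow.)"""
    out = vol
    for axis in (1, 2, 3):          # the three spatial axes
        out = np.repeat(out, scale, axis=axis)
    return out

x = np.arange(8).reshape(1, 2, 2, 2, 1)
y = nearest_resize3d(x, 2)
# y.shape == (1, 4, 4, 4, 1)
```

Each input voxel simply fills a 2×2×2 block of the output, which is exactly the no-interpolation behaviour that avoids introducing intermediate values into 0/1 data.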

I came up with this:

def resize3D(self, input_layer, width_factor, height_factor, depth_factor):
    shape = input_layer.shape
    print(shape)
    # Merge the depth and channel axes, then resize along width/height.
    rsz1 = tf.image.resize_images(
        tf.reshape(input_layer,
                   [shape[0], shape[1], shape[2], shape[3]*shape[4]]),
        [shape[1]*width_factor, shape[2]*height_factor])
    # Swap the width and depth axes, merge the trailing axes, then resize
    # again so the depth axis gets scaled.
    rsz2 = tf.image.resize_images(
        tf.reshape(
            tf.transpose(
                tf.reshape(rsz1,
                           [shape[0], shape[1]*width_factor,
                            shape[2]*height_factor, shape[3], shape[4]]),
                [0, 3, 2, 1, 4]),
            [shape[0], shape[3], shape[2]*height_factor,
             shape[1]*width_factor*shape[4]]),
        [shape[3]*depth_factor, shape[2]*height_factor])

    # Undo the width/depth swap on the resized result.
    return tf.transpose(
        tf.reshape(rsz2,
                   [shape[0], shape[3]*depth_factor, shape[2]*height_factor,
                    shape[1]*width_factor, shape[4]]),
        [0, 3, 2, 1, 4])

Which turns: [input image not preserved]

into: [output image not preserved]

I believe nearest neighbour shouldn't have the stair-casing effect (I intentionally removed the colour).

Hars' answer works correctly; however, I would like to know what's wrong with mine, if anyone can crack it.

Answer

My approach to this would be to resize the image along two axes; in the code I paste below, I resample along depth and then width:

def resize_by_axis(image, dim_1, dim_2, ax, is_grayscale):
    resized_list = []

    if is_grayscale:
        # Each unstacked slice is 2-D, so add a channel axis before resizing.
        unstack_img_depth_list = [tf.expand_dims(x, 2)
                                  for x in tf.unstack(image, axis=ax)]
        for i in unstack_img_depth_list:
            resized_list.append(tf.image.resize_images(i, [dim_1, dim_2], method=0))
        # Restack along the original axis and drop the added channel axis.
        stack_img = tf.squeeze(tf.stack(resized_list, axis=ax))
        print(stack_img.get_shape())
    else:
        unstack_img_depth_list = tf.unstack(image, axis=ax)
        for i in unstack_img_depth_list:
            resized_list.append(tf.image.resize_images(i, [dim_1, dim_2], method=0))
        stack_img = tf.stack(resized_list, axis=ax)

    return stack_img

resized_along_depth = resize_by_axis(x, 50, 60, 2, True)
resized_along_width = resize_by_axis(resized_along_depth, 50, 70, 1, True)

Where x will be the 3-D tensor, either grayscale or RGB; resized_along_width is the final resized tensor. Here we want to resize the 3-D image to dimensions of (50, 60, 70).
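The unstack/resize/stack pattern above can be sketched in plain NumPy with nearest-neighbour index arithmetic (an illustrative helper under the same axis conventions, not TensorFlow code):

```python
import numpy as np

def resize_slices_nearest(vol, dim_1, dim_2, ax):
    """Resize a 3-D array by 'unstacking' along axis `ax`, resampling every
    2-D slice to (dim_1, dim_2) with nearest-neighbour indexing, and
    'stacking' back. (Illustrative NumPy sketch of the pattern.)"""
    slices = np.moveaxis(vol, ax, 0)        # "unstack": move axis to the front
    h, w = slices.shape[1], slices.shape[2]
    rows = np.arange(dim_1) * h // dim_1    # nearest source row per output row
    cols = np.arange(dim_2) * w // dim_2    # nearest source column per output column
    resized = slices[:, rows][:, :, cols]
    return np.moveaxis(resized, 0, ax)      # "stack" back on the original axis

x = np.zeros((40, 30, 20))
y = resize_slices_nearest(x, 50, 60, 2)     # resample the slices along depth
# y.shape == (50, 60, 20)
```

Because each slice keeps its position on the unstacked axis, only the other two dimensions change per call, which is why two calls (depth, then width) are needed to reach all three target dimensions.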
