Tensorflow error producing result on a single dataset

Problem Description

I am trying my first NN with tensorflow and am unable to produce results for a single input sample. I have created a minimal example where I feed it multiple y = a * x + b inputs (for varying a, b) and try to get a result back, but it fails. Note that I don't care about accuracy here; I'm doing this as a POC. Some parameters are below:

  • N is the number of x grid points. Each input row is of length 2*N (N for x, N for y).
  • M is the number of training rows I give.
  • 2 is the number of outputs I expect (a and b); the shapes are sketched just below.
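
To make that layout concrete, here is a small illustrative sketch of the shapes involved, using toy values (a = 2, b = 1) and kept separate from the full script further down:

import numpy as np

N, M = 10, 2                      # grid points per row, number of training rows
xs = np.linspace(1.0, 9.0, N)     # shared x grid, shape (N,)
ys = 2.0*xs + 1.0                 # one example line y = a*x + b with a=2, b=1

row = np.concatenate((xs, ys))    # a single input row, shape (2*N,) == (20,)
x_train = np.stack([row, row])    # M such rows -> shape (M, 2*N) == (2, 20)
y_train = np.array([[2.0, 1.0],
                    [2.0, 1.0]])  # M rows of (a, b) -> shape (M, 2)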

Thus, my training data is x_train of size (m, 2*n) and y_train of size (m, 2). It seems that I build the model OK, but I am unable to feed it a single input of size (1, 2*n) and get back a result of size (1, 2) as desired. Instead I get the following error:

Traceback (most recent call last):
  File "xdriver.py", line 92, in <module>
    main()
  File "xdriver.py", line 89, in main
    ab2 = model.predict(rys) # This fails
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 909, in predict
    use_multiprocessing=use_multiprocessing)
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 462, in predict
    steps=steps, callbacks=callbacks, **kwargs)
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 396, in _model_iteration
    distribution_strategy=strategy)
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 594, in _process_inputs
    steps=steps)
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2472, in _standardize_user_data
    exception_prefix='input')
  File "/apps/anaconda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 574, in standardize_input_data
    str(data_shape))
ValueError: Error when checking input: expected dense_input to have shape (20,) but got array with shape (1,)

Below is the code I am using, which is the minimal example I have been able to develop to reproduce this (along with documentation to explain my process). Can anyone assess what I am doing wrong and what to change?

#!/usr/bin/env python3

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

#################
### CONSTANTS ###
#################
ARANGE = (-5.0, 5.0) # Possible values for a in training data
BRANGE = (0.0, 10.0) # Possible values for b in training data
X_MIN = 1.0 
X_MAX = 9.0 
N = 10 # Number of grid points
M = 2 # Number of {(x,y)} sets to train on


def gen_ab(arange, brange):
    """ mrange, brange are tuples of floats """
    a = (arange[1] - arange[0])*np.random.rand() + arange[0]
    b = (brange[1] - brange[0])*np.random.rand() + brange[0]

    return (a, b)

def build_model(x_data, y_data):
    """ Build the model using input / output training data
    Args:
        x_data (np array): Size (m, n*2) grid of input training data.
        y_data (np array): Size (m, 2) grid of output training data.
    Returns:
        model (Sequential model)
    """
    model = keras.Sequential()
    model.add(layers.Dense(64, activation='relu', input_dim=len(x_data[0])))
    model.add(layers.Dense(len(y_data[0])))

    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse'])

    return model


def gen_data(xs, arange, brange, m):
    """ Generate training data for lines of y = m*x + b
    Args:
        xs (list): Grid points (size N1)
        arange (tuple): Range to use for a (a_min, a_max)
        brange (tuple): Range to use for b (b_min, b_max)
        m (int): Number of y grids to generate
    Returns:
        x_data (np array): Size (m, n*2) grid of input training data.
        y_data (np array): Size (m, 2) grid of output training data.
    """
    n = len(xs)
    x_data = np.zeros((m, 2*n))
    y_data = np.zeros((m, 2))
    for ix in range(m):
        (a, b) = gen_ab(arange, brange)
        ys = a*xs + b*np.ones(xs.size)
        x_data[ix, :] = np.concatenate((xs, ys))
        y_data[ix, :] = [a, b]

    return (x_data, y_data)

def main():
    """ Main routin """
    # Generate the x axis grid to be used for all training sets
    xs = np.linspace(X_MIN, X_MAX, N)

    # Generate the training data
    # x_train has M rows (M is the number of training samples)
    # x_train has 2*N columns (first N columns are x, second N columns are y)
    # y_train has M rows, each of which has two columns (a, b) for y = ax + b
    (x_train, y_train) = gen_data(xs, ARANGE, BRANGE, M)

    model = build_model(x_train, y_train)
    model.fit(x_train, y_train, epochs=10, batch_size=32)
    model.summary()

    ####################
    ### Test example ###
    ####################
    (a, b) = gen_ab(ARANGE, BRANGE)
    ys = a*xs + b*np.ones(xs.size)
    rys = np.concatenate((xs, ys))
    ab1 = model.predict(x_train) # This succeeds
    print(a, b)
    print(ab1)
    ab2 = model.predict(rys) # This fails

if __name__ == "__main__":
    main()

Answer

The solution to this turned out to be pretty simple. You simply need to pass in the input data as a batch of size one. Changing:

ab2 = model.predict(rys)

to:

ab2 = model.predict(np.array([rys]))

did the trick!
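
For reference, the reason the original call fails is that rys has shape (2*N,) == (20,): Keras then treats it as 20 separate samples of shape (1,), while the model expects each sample to be a vector of length 20. Any way of adding a leading batch dimension works; a few equivalent variants (common NumPy idioms, not taken from the original answer):

ab2 = model.predict(rys.reshape(1, -1))      # reshape to (1, 2*N)
ab2 = model.predict(rys[np.newaxis, :])      # add a batch axis by slicing
ab2 = model.predict(np.expand_dims(rys, 0))  # same, via np.expand_dims

In each case predict returns an array of shape (1, 2), i.e. one (a, b) estimate for the single input row.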
