Multi-dimensional regression with Keras


Problem Description

I want to use Keras to train a neural network for 2-dimensional regression.

My input is a single number, and my output has two numbers:

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import initializers
from keras.optimizers import Adam
import numpy as np

model = Sequential()
# Every kernel and bias is initialized to a constant 0.0
model.add(Dense(16, input_shape=(1,), kernel_initializer=initializers.constant(0.0), bias_initializer=initializers.constant(0.0)))
model.add(Activation('relu'))
model.add(Dense(16, kernel_initializer=initializers.constant(0.0), bias_initializer=initializers.constant(0.0)))
model.add(Activation('relu'))
model.add(Dense(2, kernel_initializer=initializers.constant(0.0), bias_initializer=initializers.constant(0.0)))
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='mean_squared_error', optimizer=adam)

I then created some dummy data for training:

# 10 dummy samples: input is i/10, target is (0.1, 0.01 * i)
inputs = np.zeros((10, 1), dtype=np.float32)
targets = np.zeros((10, 2), dtype=np.float32)

for i in range(10):
    inputs[i] = i / 10.0
    targets[i, 0] = 0.1
    targets[i, 1] = 0.01 * i

And finally, I trained with minibatches in a loop, whilst testing on the training data:

while True:
    # One gradient step on the full batch, then evaluate on the training data
    loss = model.train_on_batch(inputs, targets)
    test_outputs = model.predict(inputs)
    print(test_outputs)

The problem is, the outputs printed out are as follows:

[0.1, 0.045] [0.1, 0.045] [0.1, 0.045] ..... ..... .....

So, whilst the first dimension is correct (0.1), the second dimension is not. The second dimension should be [0.01, 0.02, 0.03, .....]. In fact, the output from the network (0.045) is simply the average of what all the values in the second dimension should be.
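A quick check of that claim, using the target values defined above:

import numpy as np

# Mean of the intended second-dimension targets 0.00, 0.01, ..., 0.09
print(np.mean([0.01 * i for i in range(10)]))  # 0.045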

What am I doing wrong?

Answer

The problem is that you are initializing all the weights to zero. If all the weights are the same, then all the gradients are the same, so every unit in a layer stays identical to its neighbours: it is as if the network had a single neuron on every layer. Remove the constant initializers so that the default random initialization is used, and it works:

model = Sequential()
# No explicit initializers: the default random (glorot_uniform) initialization is used
model.add(Dense(16, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(2))
model.compile(loss='mean_squared_error', optimizer='Adam')
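
The answer does not show the training call itself; a plausible sketch (an assumption, using model.fit on the full dummy dataset) that would produce a log like the one below:

# Hypothetical training call matching the 1000-epoch log below
model.fit(inputs, targets, epochs=1000)
test_outputs = model.predict(inputs)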

The result after 1000 epochs:

Epoch 1000/1000
10/10 [==============================] - 0s - loss: 5.2522e-08

In [59]: test_outputs
Out[59]:
array([[ 0.09983768,  0.00040025],
       [ 0.09986718,  0.010469  ],
       [ 0.09985521,  0.02051429],
       [ 0.09984323,  0.03055958],
       [ 0.09983127,  0.04060487],
       [ 0.09995781,  0.05083206],
       [ 0.09995599,  0.06089856],
       [ 0.09995417,  0.07096504],
       [ 0.09995237,  0.08103154],
       [ 0.09995055,  0.09109804]], dtype=float32)
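
To make the "all gradients are the same" argument concrete, here is a minimal sketch (not from the original answer; a reduced one-hidden-layer version of the question's zero-initialized setup) showing that after training, every unit in the zero-initialized layer still has identical weights, so the layer effectively acts as a single neuron:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import initializers

# Reduced sketch of the question's setup: one zero-initialized hidden layer
zero_model = Sequential()
zero_model.add(Dense(16, input_shape=(1,),
                     kernel_initializer=initializers.constant(0.0),
                     bias_initializer=initializers.constant(0.0)))
zero_model.add(Activation('relu'))
zero_model.add(Dense(2))
zero_model.compile(loss='mean_squared_error', optimizer='adam')

inputs = np.arange(10, dtype=np.float32).reshape(10, 1) / 10.0
targets = np.stack([np.full(10, 0.1), 0.01 * np.arange(10)], axis=1).astype(np.float32)
zero_model.fit(inputs, targets, epochs=100, verbose=0)

# Every hidden unit received an identical gradient at every step, so all 16
# columns of the kernel are still identical: the layer acts as one neuron
kernel = zero_model.layers[0].get_weights()[0]  # shape (1, 16)
print(np.allclose(kernel, kernel[:, :1]))       # True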
