Neural network for adding two integer numbers


Question

I am a beginner with neural networks. I want to create a neural network that can add two integers. I have designed it as follows, but the accuracy is really low (acc: 0.0020). What can I do to increase it? 1) For creating the data:

import numpy as np
import random 
a=[]
b=[]
c=[]

for i in range(1, 1001):
    a.append(random.randint(1,999))
    b.append(random.randint(1,999))
    c.append(a[i-1] + b[i-1])

X = np.array([a,b]).transpose()
y = np.array(c).transpose().reshape(-1, 1)
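For reference, the same dataset can be generated without a Python loop. This vectorized sketch uses NumPy's generator API and matches the ranges in the loop above (the seed is illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
a = rng.integers(1, 1000, size=1000)   # same range as random.randint(1, 999)
b = rng.integers(1, 1000, size=1000)
X = np.stack([a, b], axis=1)           # shape (1000, 2): the two addends
y = (a + b).reshape(-1, 1)             # shape (1000, 1): the target sums
```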

2) For scaling my data:

from sklearn.preprocessing import MinMaxScaler
minmax = MinMaxScaler()
minmax2 = MinMaxScaler()
X = minmax.fit_transform(X)
y = minmax2.fit_transform(y)
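One detail worth keeping in mind with this setup: MinMaxScaler maps each column to [0, 1], so the model's predictions come out in the scaled space and need `inverse_transform` to be read as actual sums. A small sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# min, midpoint, and max possible sums of two integers in 1..999
sums = np.array([[2.0], [1000.0], [1998.0]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(sums)            # 2 -> 0.0, 1998 -> 1.0
recovered = scaler.inverse_transform(scaled)   # back to the original sums
```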

3) The network:


from keras import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

clfa = Sequential()
clfa.add(Dense(units=2, input_dim=2, activation='sigmoid', kernel_initializer='he_uniform'))
clfa.add(Dense(units=2, activation='sigmoid', kernel_initializer='uniform'))
clfa.add(Dense(units=2, activation='sigmoid', kernel_initializer='uniform'))
clfa.add(Dense(units=2, activation='sigmoid', kernel_initializer='uniform'))
clfa.add(Dense(units=1, activation='relu'))

opt = SGD(learning_rate=0.01)
clfa.compile(opt, loss='mean_squared_error', metrics=['acc'])
clfa.fit(X, y, epochs=140)

Output:

Epoch 133/140
1000/1000 [==============================] - 0s 39us/step - loss: 0.0012 - acc: 0.0020
Epoch 134/140
1000/1000 [==============================] - 0s 40us/step - loss: 0.0012 - acc: 0.0020   
Epoch 135/140
1000/1000 [==============================] - 0s 41us/step - loss: 0.0012 - acc: 0.0020
Epoch 136/140
1000/1000 [==============================] - 0s 40us/step - loss: 0.0012 - acc: 0.0020
Epoch 137/140
1000/1000 [==============================] - 0s 41us/step - loss: 0.0012 - acc: 0.0020
Epoch 138/140
1000/1000 [==============================] - 0s 42us/step - loss: 0.0012 - acc: 0.0020   
Epoch 139/140
1000/1000 [==============================] - 0s 40us/step - loss: 0.0012 - acc: 0.0020   
Epoch 140/140
1000/1000 [==============================] - 0s 42us/step - loss: 0.0012 - acc: 0.0020 

That is my code with the console output.

I have tried many different combinations of optimizers, losses, and activations, and this data fits a linear regression perfectly.
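The last point is easy to verify: addition is an exactly linear function (y = 1·a + 1·b + 0), so ordinary least squares recovers it perfectly. A quick check with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
X = rng.integers(1, 1000, size=(1000, 2)).astype(float)
y = X.sum(axis=1)

reg = LinearRegression().fit(X, y)
# coefficients come out ~[1, 1] with intercept ~0 and R^2 = 1
```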

Answer

Two mistakes and several issues.

Mistakes:

  • This is a regression problem, so the activation of the last layer should be linear, not relu (leaving it without specifying anything will work, since linear is the default activation in a Keras layer).
  • Accuracy is meaningless in regression; remove metrics=['acc'] from your model compilation - you should judge the performance of your model only with your loss.
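To see why accuracy is the wrong lens here: Keras's `acc` on a regression target counts exact matches between continuous predictions and continuous targets, which essentially never occur even for an excellent model. A NumPy sketch of the two views:

```python
import numpy as np

y_true = np.array([0.50, 0.25, 0.75, 0.60])
y_pred = np.array([0.501, 0.249, 0.752, 0.598])   # a very good regression fit

mse = np.mean((y_true - y_pred) ** 2)        # tiny: the fit is close
exact_match_acc = np.mean(y_true == y_pred)  # 0.0: "accuracy" calls it useless
```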

Issues:

  • We don't use sigmoid activations for the intermediate layers; change all of them to relu.
  • Remove the kernel_initializer argument, thus leaving the default glorot_uniform, which is the recommended one.
  • Stacking several Dense layers with only two nodes each is not a good idea; try reducing the number of layers and increasing the number of nodes. See here for a simple example network for the iris data.
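Putting the answer's advice together, a corrected model might look like the sketch below (this assumes a TensorFlow-backed Keras install; the layer width of 32 and the epoch count are illustrative choices, not values prescribed by the answer):

```python
import numpy as np
from keras import Sequential, Input
from keras.layers import Dense
from keras.optimizers import SGD

rng = np.random.default_rng(seed=1)
a = rng.integers(1, 1000, size=1000)
b = rng.integers(1, 1000, size=1000)
X = np.stack([a, b], axis=1) / 999.0   # scale inputs to [0, 1]
y = ((a + b) / 1998.0).reshape(-1, 1)  # scale targets the same way

model = Sequential([
    Input(shape=(2,)),
    Dense(32, activation='relu'),  # relu hidden layers, default glorot_uniform init
    Dense(32, activation='relu'),  # fewer, wider layers than the original
    Dense(1),                      # linear output: this is regression
])
model.compile(optimizer=SGD(learning_rate=0.01),
              loss='mean_squared_error')  # judge by loss only; no accuracy metric
model.fit(X, y, epochs=50, verbose=0)
```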
