Keras model accuracy not improving


Problem description


I'm trying to train a neural network to predict the ratings for players in FIFA 18 by EA Sports (ratings are between 64 and 99). I'm using their players database (https://easports.com/fifa/ultimate-team/api/fut/item?page=1) and I've processed the data into training_x, testing_x, training_y, testing_y. Each of the training samples is a numpy array containing 7 values: the first 6 are the different stats of the player (shooting, passing, dribbling, etc.) and the last value is the position of the player (which I mapped to 1-8, depending on the position). Each of the target values (training_y / testing_y) is a single integer between 64 and 99, representing the rating of that player.
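For concreteness, the layout described above could be sketched like this (the rows and ratings here are made-up illustrative values, not real entries from the EA Sports database):

```python
import numpy as np

# Two hypothetical samples: 6 stats (shooting, passing, dribbling, ...)
# followed by a position code in 1-8. Targets are ratings in 64-99.
training_x = np.array([
    [85, 72, 90, 60, 55, 70, 3],
    [60, 88, 75, 80, 65, 72, 5],
], dtype=np.float32)
training_y = np.array([84, 79], dtype=np.float32)

assert training_x.shape[1] == 7  # 6 stats + 1 position feature
```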


I've tried many different hyperparameters, including changing the activation functions to tanh and relu, and I've tried adding a batch normalization layer after the first dense layer (I thought that it might be useful since one of my features is very small and the other features are between 50-99), I've played around with the SGD optimizer (changed the learning rate, momentum, even tried changing the optimizer to Adam), tried different loss functions, added/removed dropout layers, and tried different regularizers for the weights of the model.

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras import optimizers, regularizers

model = Sequential()
model.add(Dense(64, input_shape=(7,), 
          kernel_regularizer=regularizers.l2(0.01)))
# batch normalization?
model.add(Activation('sigmoid'))
model.add(Dense(64, kernel_regularizer=regularizers.l2(0.01), 
          activation='sigmoid'))
model.add(Dropout(0.3))
model.add(Dense(32, kernel_regularizer=regularizers.l2(0.01), 
          activation='sigmoid'))
model.add(Dense(1, activation='linear'))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_absolute_error', metrics=['accuracy'], 
          optimizer=sgd)
model.fit(training_x, training_y, epochs=50, batch_size=128, shuffle=True)


When I train the model, the loss is always nan and the accuracy is always 0, even though I've tried adjusting a lot of different parameters. However, if I remove the last feature from my data, the position of the players, and update the input shape of the first dense layer, the model actually "trains" and ends up with around 6% accuracy no matter what parameters I change. In that case, I've found that the model only predicts 79 to be the player's rating. What am I doing inherently wrong?

Recommended answer

You can try the following steps:
  1. Use the mean squared error loss function.
  2. Use Adam, which will help you converge faster with a low learning rate like 0.0001 or 0.001. Otherwise, try the RMSprop optimizer.
  3. Use the default regularizers. That is, none at all.
  4. Since this is a regression task, use an activation function like ReLU in all layers except the output layer (including the input layer). Use linear activation in the output layer.
  5. As mentioned in the comments by @pooyan, normalize the features. See here. Even try standardizing the features. Use whichever suits best.
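Putting the five steps together, a minimal sketch of the revised model could look like the following. The data here is randomly generated stand-in data with the same shape as the question's arrays, not the real FIFA dataset, and newer Keras versions spell the learning-rate argument `learning_rate` rather than `lr`:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

np.random.seed(0)

# Stand-in data: 6 stats in 0-99 plus a position code in 1-8,
# with ratings in 64-99 as regression targets.
stats = np.random.uniform(0, 99, size=(500, 6))
positions = np.random.randint(1, 9, size=(500, 1))
training_x = np.hstack([stats, positions]).astype('float32')
training_y = np.random.uniform(64, 99, size=(500,)).astype('float32')

# Step 5: min-max normalize each feature column to [0, 1], so the small
# position feature and the 50-99 stats end up on the same scale.
x_min = training_x.min(axis=0)
x_max = training_x.max(axis=0)
training_x = (training_x - x_min) / (x_max - x_min)

model = Sequential()
# Step 4: ReLU in the hidden layers; step 3: no weight regularizers.
model.add(Dense(64, input_shape=(7,), activation='relu'))
model.add(Dense(32, activation='relu'))
# Step 4: linear activation in the output layer for regression.
model.add(Dense(1, activation='linear'))
# Steps 1-2: MSE loss with Adam at a low learning rate. The 'accuracy'
# metric is dropped because it is not meaningful for regression.
model.compile(loss='mean_squared_error',
              optimizer=optimizers.Adam(learning_rate=0.001))
model.fit(training_x, training_y, epochs=5, batch_size=128, verbose=0)
```

With scaled inputs and no saturating sigmoids, the loss should come out as a finite number instead of nan; to judge quality, look at the loss (or mean absolute error) rather than accuracy.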

