Tensorflow autoencoder cost not decreasing?


Problem description

I am working on unsupervised feature learning with autoencoders in TensorFlow. I have written the following code for the Amazon csv dataset, and when I run it the cost does not decrease at every iteration. Can you please help me find the bug in the code?

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df=pd.read_csv('../dataset/amazon_1_b.csv')
df=df.drop(df.columns[0], axis=1)
#df1, df2 = df[:25000, :], df[25000:, :] if len(df) > 25000 else df, None
df1=df.head(25000)
df2=df.tail(len(df)-25000)
trY=df1['ACTION'].as_matrix()
teY=df2['ACTION'].as_matrix()
df1=df1.drop(df.columns[9], axis=1)
df2=df2.drop(df.columns[9], axis=1)
trX=df1.as_matrix()
teX=df2.as_matrix()



# Parameters
learning_rate = 0.01
training_epochs = 50
batch_size = 20
display_step = 1
examples_to_show = 10

# Network Parameters
n_hidden_1 = 20 # 1st layer num features
n_hidden_2 = 5 # 2nd layer num features
n_input = trX.shape[1] # number of input features in the Amazon csv data

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, n_input])

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}



# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2


# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)

# Prediction
y_pred = decoder_op
# Targets (Labels) are the input data.
y_true = X

# Define loss and optimizer, minimize the squared error
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()



# Launch the graph
# Using InteractiveSession (more convenient while using Notebooks)
sess = tf.InteractiveSession()
sess.run(init)

total_batch = int(trX.shape[0]/batch_size)
# Training cycle
for epoch in range(training_epochs):
    # Loop over all batches
    for i in range(total_batch):
        batch_xs= trX[batch_size*i:batch_size*(i+1)]
        # Run optimization op (backprop) and cost op (to get loss value)
        _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
    # Display logs per epoch step
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1),
              "cost=", "{:.9f}".format(c))

print("Optimization Finished!")

# Applying encode and decode over test set
encode_decode = sess.run(
    y_pred, feed_dict={X: teX})

The link to the dataset is here. The link to the python file is here.

Following is the result up to epoch 31; it stays the same through all 50 epochs.

Epoch: 0001 cost= 18134403072.000000000
Epoch: 0002 cost= 18134403072.000000000
Epoch: 0003 cost= 18134403072.000000000
Epoch: 0004 cost= 18134403072.000000000
Epoch: 0005 cost= 18134403072.000000000
Epoch: 0006 cost= 18134403072.000000000
Epoch: 0007 cost= 18134403072.000000000
Epoch: 0008 cost= 18134403072.000000000
Epoch: 0009 cost= 18134403072.000000000
Epoch: 0010 cost= 18134403072.000000000
Epoch: 0011 cost= 18134403072.000000000
Epoch: 0012 cost= 18134403072.000000000
Epoch: 0013 cost= 18134403072.000000000
Epoch: 0014 cost= 18134403072.000000000
Epoch: 0015 cost= 18134403072.000000000
Epoch: 0016 cost= 18134403072.000000000
Epoch: 0017 cost= 18134403072.000000000
Epoch: 0018 cost= 18134403072.000000000
Epoch: 0019 cost= 18134403072.000000000
Epoch: 0020 cost= 18134403072.000000000
Epoch: 0021 cost= 18134403072.000000000
Epoch: 0022 cost= 18134403072.000000000
Epoch: 0023 cost= 18134403072.000000000
Epoch: 0024 cost= 18134403072.000000000
Epoch: 0025 cost= 18134403072.000000000
Epoch: 0026 cost= 18134403072.000000000
Epoch: 0027 cost= 18134403072.000000000
Epoch: 0028 cost= 18134403072.000000000
Epoch: 0029 cost= 18134403072.000000000
Epoch: 0030 cost= 18134403072.000000000
Epoch: 0031 cost= 18134403072.000000000

Answer

Your optimization method, RMSPropOptimizer, seems really slow in this case.

You may want to try the Adam optimizer instead; at least that worked for me.
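
For reference, a minimal sketch of that change, assuming the graph is built exactly as in the question (same cost tensor and learning_rate). Only the optimizer line is swapped from RMSProp to tf.train.AdamOptimizer, which is part of the TensorFlow 1.x API used here:

# Sketch of the suggested change: replace the RMSProp optimizer with Adam.
# Everything else (weights, encoder/decoder, cost) stays as in the question.
# cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))   # unchanged
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

# The training loop is unchanged:
# _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})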
