Tensorflow LSTM Error (ValueError: Shapes must be equal rank, but are 2 and 1)


Problem Description


I know this question has been asked many times, but I am kind of new to tensorflow and none of the previous threads could solve my issue. I am trying to implement an LSTM for a series of sensor data in order to classify the data. I want my data to be classified as 0 or 1, so it is a binary classifier. I have 2539 samples in total, each with 555 time_steps, and each time_step carries 9 features, so my input has shape (2539, 555, 9). For each sample I have a label array holding the value 0 or 1, so its shape is (2539, 1), where each row has the value 0 or 1. I have prepared the code below, but I get an error regarding the dimensionality of my logits and labels. No matter how I reshape them I still get errors.
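
For reference, a minimal sketch of the shapes described above, with dummy arrays standing in for the real sensor data and labels:

import numpy as np

# 2539 samples, 555 time steps per sample, 9 features per time step
final_training_set = np.zeros((2539, 555, 9), dtype=np.float32)

# one binary (0/1) label per sample
labels = np.random.randint(0, 2, size=(2539, 1)).astype(np.float32)

print(final_training_set.shape)  # (2539, 555, 9)
print(labels.shape)              # (2539, 1)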


Can you please help me understand the problem?

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

X_train, X_test, y_train, y_test = train_test_split(final_training_set, labels, test_size=0.2, shuffle=False, random_state=42)


epochs = 10
time_steps = 555
n_classes = 2
n_units = 128
n_features = 9
batch_size = 8

x = tf.placeholder('float32', [batch_size, time_steps, n_features])
y = tf.placeholder('float32', [None, n_classes])

###########################################
out_weights=tf.Variable(tf.random_normal([n_units,n_classes]))
out_bias=tf.Variable(tf.random_normal([n_classes]))
###########################################

lstm_layer=tf.nn.rnn_cell.LSTMCell(n_units,state_is_tuple=True)
initial_state = lstm_layer.zero_state(batch_size, dtype=tf.float32)
outputs,states = tf.nn.dynamic_rnn(lstm_layer, x,
                                   initial_state=initial_state,
                                   dtype=tf.float32)
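# (For reference: with time_major=False, the default, `outputs` from dynamic_rnn has
# shape [batch_size, time_steps, n_units] and `states` is the final LSTMStateTuple.)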


###########################################
output=tf.matmul(outputs[-1],out_weights)+out_bias
print(np.shape(output))

logit = output
logit = (logit, [-1])

cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=labels))
optimizer = tf.train.AdamOptimizer().minimize(cost)
with tf.Session() as sess:

        tf.global_variables_initializer().run()
        tf.local_variables_initializer().run()

        for epoch in range(epochs):
            epoch_loss = 0

            i = 0
            for i in range(int(len(X_train) / batch_size)):

                start = i
                end = i + batch_size

                batch_x = np.array(X_train[start:end])
                batch_y = np.array(y_train[start:end])

                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

                epoch_loss += c

                i += batch_size

            print('Epoch', epoch, 'completed out of', epochs, 'loss:', epoch_loss)

        pred = tf.round(tf.nn.sigmoid(logit)).eval({x: np.array(X_test), y: np.array(y_test)})

        f1 = f1_score(np.array(y_test), pred, average='macro')

        accuracy=accuracy_score(np.array(y_test), pred)


        print("F1 Score:", f1)
        print("Accuracy Score:",accuracy)

This is the error:


ValueError: Shapes must be equal rank, but are 2 and 1
From merging shape 0 with other shapes. for 'logistic_loss/logits' (op: 'Pack') with input shapes: [555,2], [1].

Recommended Answer


Just an update: the problem was with the shape of the labels. After one-hot encoding the labels and making them two-dimensional, the problem was solved.
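
The update does not show the exact change, so the following is only a minimal sketch of what one-hot encoding the (2539, 1) label array might look like; the helper name one_hot_labels is not from the original post.

import numpy as np

# Hypothetical helper (not from the original post): turn the (2539, 1) array of
# 0/1 values into a (2539, 2) one-hot matrix so it matches n_classes = 2 and the
# placeholder y = tf.placeholder('float32', [None, n_classes]).
def one_hot_labels(labels, n_classes=2):
    flat = labels.reshape(-1).astype(np.int64)        # (2539,)
    return np.eye(n_classes, dtype=np.float32)[flat]  # (2539, 2)

labels = one_hot_labels(labels)  # now two-dimensional: (2539, 2)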

