How to get the output of the fully connected layer from CNN in Tensorflow?


Problem description

I want to extract CNN activations from the fully connected layer of a convolutional neural network using TensorFlow. In the following post, a user asked that question:

How to extract activation from CNN layers using tensorflow?

The answer given there was:

sess = tf.InteractiveSession()

fully_connected = ...
value_of_fully_connected = sess.run(fully_connected, feed_dict={your_placeholders: your_values})

However, in my code the fully connected layer is defined outside of the tf.Session() block. This is the function that computes the convolutions:

def conv_net(x, weights, biases):  

    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)

    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)

    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    conv3 = maxpool2d(conv3, k=2)


    # Fully connected layer
    fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)

    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out
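The conv2d and maxpool2d helpers are not shown in the question; a minimal sketch of what they typically look like in TensorFlow 1.x (assuming 'SAME' padding and a ReLU after each convolution, as in the standard tutorials) would be:

import tensorflow as tf

def conv2d(x, W, b, strides=1):
    # 2-D convolution, bias add, then ReLU
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # k x k max pooling with stride k
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')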

Then the prediction:

pred = conv_net(x, weights, biases)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
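The accuracy and init ops used in the training loop below are also not shown in the question; a common way to define them (assuming y holds one-hot labels) is:

# Fraction of predictions whose argmax matches the one-hot label
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializer for all variables, run once at the start of the session
init = tf.global_variables_initializer()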

And here is the training loop:

with tf.Session() as sess:
    sess.run(init) 
    train_loss = []
    test_loss = []
    train_accuracy = []
    test_accuracy = []
    summary_writer = tf.summary.FileWriter('./Output', sess.graph)
    for i in range(training_iters):
        for batch in range(len(train_X)//batch_size):
            batch_x = train_X[batch*batch_size:min((batch+1)*batch_size,len(train_X))]
            batch_y = train_y[batch*batch_size:min((batch+1)*batch_size,len(train_y))]    
            # Run optimization op (backprop)
            opt = sess.run(optimizer, feed_dict={x: batch_x,
                                                 y: batch_y})
            # Calculate batch loss and accuracy
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                              y: batch_y})
        print("Iter " + str(i) + ", Loss= " + \
                      "{:.6f}".format(loss) + ", Training Accuracy= " + \
                      "{:.5f}".format(acc))
        print("Optimization Finished!")

        # Calculate accuracy for all 10000 mnist test images
        test_acc,valid_loss = sess.run([accuracy,cost], feed_dict={x: test_X,y : test_y})
        train_loss.append(loss)
        test_loss.append(valid_loss)
        train_accuracy.append(acc)
        test_accuracy.append(test_acc)
        print("Testing Accuracy:","{:.5f}".format(test_acc))
    summary_writer.close()

As you can see, the fully connected layer is created inside the function conv_net(), and I cannot seem to access it from inside the tf.Session() code.

I need access to that fully connected layer so that I can apply the answer from the post above. How can I do that?

Answer

In Python, a function can return a list of outputs. So I would do something like:

def conv_net(x, weights, biases):  

    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)

    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)

    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    conv3 = maxpool2d(conv3, k=2)


    # Fully connected layer
    fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)

    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return [out,fc1]

and when you want to get the outputs, you do:

pred, fcn = conv_net(x, weights, biases)

When you want to see the result inside the session, do:

# fcn depends on the placeholder x, so x must be fed here as well
fcn_evaluated = sess.run(fcn, feed_dict={x: test_X})
print(fcn_evaluated)
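If the activations are needed during training rather than afterwards, they can be fetched in the same sess.run call that already computes the loss; a minimal sketch based on the training loop above:

# Fetch the fc1 activations together with the batch loss and accuracy
loss, acc, fc1_batch = sess.run([cost, accuracy, fcn],
                                feed_dict={x: batch_x, y: batch_y})
# fc1_batch is a NumPy array of shape (batch_size, number_of_fc1_units)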

