adding multiple metadata to tflearn CNN
Problem Description
I'm using a CNN for (medical) image analysis and prediction, with a fairly typical architecture. I added one set of metadata to the network like this, and it seems to work:

network = input_data(shape=[..], ..)
metadata_1 = input_data(shape=[..], ..)
network = <convolutions and some max pooling>
network = fully_connected(network, 100, ...)
network = merge([network, metadata_1], 'concat')
network = fully_connected(...)
...
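Conceptually, a 'concat' merge just appends the metadata values to the learned feature vector along the feature axis. A minimal NumPy sketch of what that merge step does to the shapes (the batch size and feature counts here are made up for illustration):

```python
import numpy as np

# Hypothetical sizes: a batch of 8 samples, 100 features out of the
# fully-connected conv branch, and 5 metadata values per sample.
conv_features = np.random.rand(8, 100)
metadata_1 = np.random.rand(8, 5)

# tflearn's merge([...], 'concat') joins tensors along the feature
# axis (axis=1), analogous to this NumPy concatenation:
merged = np.concatenate([conv_features, metadata_1], axis=1)
print(merged.shape)  # (8, 105)
```

The subsequent fully_connected layer then learns weights over the combined 105 features, image-derived and metadata alike.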
Now, could I extend this to do the following? Does anyone have experience with this, or know of any pitfalls?
network = input_data(shape=[..],..)
metadata_1 = input_data(shape=[..],..)
...
metadata_n = input_data(shape=[..],..)
network = <convolutions and some max pooling>
network = fully_connected(network, 100, ...)
network = merge([network, metadata_1], 'concat')
...
network = merge([network, metadata_n], 'concat')
network = fully_connected(...)
...
Thanks in advance.
Recommended Answer
I think you're talking about layer concatenation here. At least that's what I used in my CNNs.
Now, in your case you're merging metadata into consecutive layers n times. This produces n extra merge layers, which can become memory-intensive. What I find more intuitive is to use a single concatenation and join the conv output and all the metadata layers together:
network = <convolutions and some max pooling>
network = fully_connected(network, 100, ...)
network = merge([network, metadata_1, metadata_2, ..., metadata_n], 'concat')
network = fully_connected(...)
...
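For pure concatenation the two wirings actually produce the same merged tensor, just built in one step instead of n. A quick NumPy check with illustrative shapes (batch of 8, 100 conv features, four 3-value metadata inputs):

```python
import numpy as np

conv = np.random.rand(8, 100)
metas = [np.random.rand(8, 3) for _ in range(4)]  # n = 4 metadata inputs

# Question's approach: merge metadata one input at a time.
chained = conv
for m in metas:
    chained = np.concatenate([chained, m], axis=1)

# Answer's approach: one concatenation of everything.
single = np.concatenate([conv] + metas, axis=1)

# Both yield the same (8, 112) tensor; the single concat just
# avoids building n intermediate merge layers in the graph.
assert np.array_equal(chained, single)
print(single.shape)  # (8, 112)
```

Any difference in training would come from what the following fully_connected layers do with that vector, not from the merge itself.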
You might get different results with your approach, but I suspect there won't be much difference. If you want to know for sure, you should try both.