Using a learned Artificial Neural Network to solve inputs

Problem Description

I've recently been delving into artificial neural networks again, both evolved and trained. I have a question regarding what methods, if any, exist to solve for the inputs that would result in a target output set. Is there a name for this? Everything I try to look up leads me to backpropagation, which isn't necessarily what I need. In my search, the closest thing I've found to expressing my question is:

Is it possible to run a neural network in reverse?

That question told me that there would indeed be many solutions for networks with varying numbers of nodes in their layers, and that they would not be trivial to solve for. My idea was to simply march toward an ideal set of inputs using the weights that were established during learning. Does anyone else have experience doing something like this?

In order to elaborate: say you have a network with 401 input nodes, representing a 20x20 grayscale image plus a bias, two hidden layers consisting of 100 and 25 nodes, and 6 output nodes representing a classification (symbols, Roman numerals, etc.). After training the neural network so that it can classify with an acceptable error, I would like to run the network backwards. This means I would feed the output layer a classification I would like to see, and the network would imagine a set of inputs that would result in the expected output. For the Roman numeral example, this could mean I would ask it to run the net in reverse for the symbol 'X', and it would generate an image resembling what the net thinks an 'X' looks like. In this way, I could get a good idea of the features it learned in order to separate the classifications. I feel it would be very beneficial for understanding how ANNs function and learn in the grand scheme of things.
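
To make the "marching" idea concrete, here is a minimal sketch (my own illustration, not code from the post) of gradient descent on the input of an already-trained network with roughly the architecture described above: a 400-pixel image, sigmoid hidden layers of 100 and 25 units, and 6 outputs. The weight matrices W1, W2, W3 are random placeholders standing in for trained weights, and the explicit bias input is omitted for brevity.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(100, 400))   # placeholder for trained weights: 20x20 image -> 100 hidden
    W2 = rng.normal(scale=0.1, size=(25, 100))    # 100 hidden -> 25 hidden
    W3 = rng.normal(scale=0.1, size=(6, 25))      # 25 hidden -> 6 class outputs

    def forward(x):
        h1 = sigmoid(W1 @ x)
        h2 = sigmoid(W2 @ h1)
        return h1, h2, sigmoid(W3 @ h2)

    def input_gradient(x, target):
        # Gradient of the squared error against a target one-hot vector,
        # taken with respect to the input pixels (the weights stay fixed).
        h1, h2, y = forward(x)
        d3 = (y - target) * y * (1 - y)
        d2 = (W3.T @ d3) * h2 * (1 - h2)
        d1 = (W2.T @ d2) * h1 * (1 - h1)
        return W1.T @ d1

    target = np.zeros(6)
    target[0] = 1.0                               # the class whose "ideal" image we want
    x = rng.uniform(0.4, 0.6, size=400)           # start from a roughly grey image

    for _ in range(500):
        x -= 0.5 * input_gradient(x, target)      # march the pixels toward the target output
        x = np.clip(x, 0.0, 1.0)                  # keep values in the valid grayscale range

    image = x.reshape(20, 20)                     # the network's "imagined" input for that class

Whether the resulting image looks like anything human-readable depends heavily on the trained weights; as the linked question suggests, many different inputs can produce the same output.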

Recommended Answer

For a simple feed-forward, fully connected NN, it is possible to project a hidden unit's activation into pixel space by taking the inverse of the activation function (for example, the logit for sigmoid units), dividing it by the sum of the incoming weights, and then multiplying that value by the weight of each pixel. That gives a visualization of the average pattern recognized by this hidden unit. Summing up these patterns for each hidden unit results in the average pattern that corresponds to this particular set of hidden unit activities. In principle, the same procedure can be applied to project output activations into hidden unit activity patterns.
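
As a rough sketch of that projection, assuming sigmoid hidden units (W1 and activations are hypothetical names: W1 holds one row of incoming pixel weights per hidden unit, and activations holds the hidden activities to project back):

    import numpy as np

    def logit(a):
        # Inverse of the sigmoid activation function.
        return np.log(a / (1.0 - a))

    def project_to_pixels(W1, activations):
        # W1: (n_hidden, n_pixels) incoming weights; activations: desired hidden activities.
        pattern = np.zeros(W1.shape[1])
        for j, a in enumerate(activations):
            scale = logit(a) / W1[j].sum()   # invert the activation, divide by the sum of incoming weights
            pattern += scale * W1[j]         # multiply by the weight of each pixel and accumulate
        return pattern                       # average pattern for this set of hidden unit activities

Reshaping the result to 20x20 gives an image of the average pattern; applying the same function to the output-layer weights would project output activations back into hidden unit activity patterns.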

This is indeed useful for analyzing what features a NN has learned in image recognition. For more complex methods you can take a look at this paper (among other things, it contains examples of patterns that a NN can learn).

You cannot exactly run a NN in reverse, because it does not remember all of the information from the source image - only the patterns it learned to detect. So the network cannot "imagine a set of inputs". However, it is possible to sample a probability distribution (taking each weight as the probability of activation of the corresponding pixel) and produce a set of patterns that can be recognized by a particular neuron.
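
A minimal sketch of that sampling idea (my own illustration; w is a hypothetical variable holding one unit's incoming pixel weights, and the min-max normalization is just one possible way to turn weights into probabilities):

    import numpy as np

    def sample_patterns(w, n_samples=16, rng=None):
        # Treat each pixel's normalized weight as its activation probability
        # and draw independent Bernoulli samples to produce candidate patterns.
        rng = np.random.default_rng() if rng is None else rng
        p = (w - w.min()) / (w.max() - w.min())
        return (rng.random((n_samples, w.size)) < p).astype(float)

Each row of the result, reshaped to 20x20, is one binary pattern that the chosen neuron should tend to respond to.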
