Approximating the sine function with a neural network


Problem description

For learning purposes, I have implemented a simple neural network framework that only supports multi-layer perceptrons and plain backpropagation. It works okay-ish for linear classification and the usual XOR problem, but for sine-function approximation the results are not satisfying.

I'm basically trying to approximate one period of the sine function with one hidden layer consisting of 6-10 neurons. The network uses the hyperbolic tangent as the activation function for the hidden layer and a linear function for the output. The result remains quite a rough estimate of the sine wave and takes a long time to compute.
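For concreteness, here is a minimal sketch of the kind of setup I mean: one tanh hidden layer, a linear output, and plain batch gradient descent (no momentum, no adaptive learning rate). The hidden size, learning rate and initialization below are only illustrative guesses, not the exact values from my implementation.

import numpy as np

# One period of the sine function as training data
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
Y = np.sin(X)

hidden = 8                                  # illustrative hidden-layer size
W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
lr = 0.01                                   # illustrative learning rate

for epoch in range(20000):
    # forward pass: tanh hidden layer, linear output
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - Y                           # error for mean-squared loss

    # backward pass (plain backpropagation)
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # tanh derivative
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # vanilla gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1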

I looked at encog for reference, but even with that I fail to get it to work with simple backpropagation (switching to resilient propagation makes it better, but it is still far worse than the super slick R script provided in this similar question). So am I actually trying to do something that's not possible? Is it not possible to approximate sine with simple backpropagation (no momentum, no dynamic learning rate)? What is the actual method used by the neural network library in R?

EDIT: I know that it is definitely possible to find a good-enough approximation even with simple backpropagation (if you are incredibly lucky with your initial weights), but I was actually more interested in whether this is a feasible approach. The R script I linked to just seems to converge incredibly fast and robustly (in 40 epochs with only a few training samples) compared to my implementation or even encog's resilient propagation. I'm just wondering if there's something I can do to improve my backpropagation algorithm to get that same performance, or do I have to look into some more advanced learning method?

Recommended answer

This can be rather easily implemented using modern frameworks for neural networks like TensorFlow.

For example, a two-layer neural network using 100 neurons per layer trains in a few seconds on my computer and gives a good approximation:

The code is also quite simple:

import tensorflow as tf
import numpy as np

with tf.name_scope('placeholders'):
    x = tf.placeholder('float', [None, 1])   # network input
    y = tf.placeholder('float', [None, 1])   # target values sin(x)

with tf.name_scope('neural_network'):
    # two fully connected hidden layers of 100 units each
    # (tf.contrib.layers.fully_connected defaults to ReLU activation)
    x1 = tf.contrib.layers.fully_connected(x, 100)
    x2 = tf.contrib.layers.fully_connected(x1, 100)
    # linear output layer
    result = tf.contrib.layers.fully_connected(x2, 1,
                                               activation_fn=None)

    # squared-error loss between prediction and target
    loss = tf.nn.l2_loss(result - y)

with tf.name_scope('optimizer'):
    train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Train the network on randomly sampled points in [0, 10)
    for i in range(10000):
        xpts = np.random.rand(100) * 10
        ypts = np.sin(xpts)

        _, loss_result = sess.run([train_op, loss],
                                  feed_dict={x: xpts[:, None],
                                             y: ypts[:, None]})

        print('iteration {}, loss={}'.format(i, loss_result))
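Note that the snippet above uses the TensorFlow 1.x API; tf.contrib was removed in TensorFlow 2. A roughly equivalent sketch with the Keras API, assuming the same layer sizes and sampling range (and mirroring the defaults of the snippet above: ReLU hidden layers, linear output), could look like this:

import tensorflow as tf
import numpy as np

# same architecture as above: two hidden layers of 100 units, linear output
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')

# sample training points in [0, 10), as in the training loop above
xpts = np.random.rand(10000, 1) * 10
ypts = np.sin(xpts)
model.fit(xpts, ypts, epochs=20, batch_size=100)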
