How to implement mini-batch gradient descent in python?


Problem description

I have just started to learn deep learning, and I found myself stuck when it came to gradient descent. I know how to implement batch gradient descent, and I understand how it works, as well as how mini-batch and stochastic gradient descent work in theory. But I really can't understand how to implement them in code.

import numpy as np

# toy dataset: 4 examples with 3 input features each
X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
alpha,hidden_dim = (0.5,4)
# random weight initialisation in [-1, 1)
synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
for j in range(60000):  # range instead of Python 2's xrange
    # forward pass with sigmoid activations
    layer_1 = 1/(1+np.exp(-(np.dot(X,synapse_0))))
    layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1))))
    # backpropagate the error through both layers
    layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
    layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
    # full-batch update: every example contributes to every step
    synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
    synapse_0 -= (alpha * X.T.dot(layer_1_delta))

This is the sample code from ANDREW TRASK's blog. It's small and easy to understand. This code implements batch gradient descent, but I would like to implement mini-batch and stochastic gradient descent in this sample. How could I do this? What do I have to add/modify in this code in order to implement mini-batch and stochastic gradient descent respectively? Your help would mean a lot to me. Thanks in advance. (I know this sample code has only a few examples, whereas I would need a large dataset to split into mini-batches, but I would like to know how to implement it.)

Recommended answer

This function returns the mini-batches, given the inputs and targets:

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    assert inputs.shape[0] == targets.shape[0]
    if shuffle:
        # visit the examples in a fresh random order each epoch
        indices = np.arange(inputs.shape[0])
        np.random.shuffle(indices)
    # note: a trailing partial batch (fewer than batchsize examples) is dropped
    for start_idx in range(0, inputs.shape[0] - batchsize + 1, batchsize):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batchsize]
        else:
            excerpt = slice(start_idx, start_idx + batchsize)
        yield inputs[excerpt], targets[excerpt]
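
Note that stochastic gradient descent is just the special case batchsize=1 of the same generator, so no separate implementation is needed; a minimal usage sketch:

# batchsize=1 makes every yielded batch a single example,
# which turns the training loop into stochastic gradient descent
for x_batch, y_batch in iterate_minibatches(X, y, batchsize=1, shuffle=True):
    ...  # one gradient update per example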

And this shows how to use it for training:

for n in range(n_epochs):  # range instead of Python 2's xrange
    for x_batch, y_batch in iterate_minibatches(X, Y, batch_size, shuffle=True):
        # one parameter update per mini-batch
        l_train, acc_train = f_train(x_batch, y_batch)

    # evaluate on the validation set once per epoch
    l_val, acc_val = f_val(Xt, Yt)
    logging.info('epoch %d, train_loss %s, acc %s, val_loss %s, acc %s',
                 n, l_train, acc_train, l_val, acc_val)

Obviously, you need to define f_train, f_val, and the other functions yourself, depending on the optimisation library (e.g. Lasagne, Keras) you are using.
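
To tie this back to the code in the question, here is a minimal sketch of how Trask's full-batch loop could be rewritten to update the weights once per mini-batch, reusing iterate_minibatches from above; the batch size of 2 and the epoch count are illustrative assumptions, not part of the original answer:

import numpy as np

X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
alpha, hidden_dim, batch_size = 0.5, 4, 2

synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1

for epoch in range(60000):
    # assumes iterate_minibatches as defined earlier in this answer
    for x_batch, y_batch in iterate_minibatches(X, y, batch_size, shuffle=True):
        # forward pass on the current mini-batch only
        layer_1 = 1/(1+np.exp(-np.dot(x_batch, synapse_0)))
        layer_2 = 1/(1+np.exp(-np.dot(layer_1, synapse_1)))
        # backward pass: same deltas as before, computed per batch
        layer_2_delta = (layer_2 - y_batch)*(layer_2*(1-layer_2))
        layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
        # update after every mini-batch; batch_size=1 would give pure SGD
        synapse_1 -= alpha * layer_1.T.dot(layer_2_delta)
        synapse_0 -= alpha * x_batch.T.dot(layer_1_delta)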
