Can someone explain to me the difference between a cost function and the gradient descent equation in logistic regression?

Problem Description

I'm going through the ML class on Coursera on logistic regression and also the Manning book Machine Learning in Action. I'm trying to learn by implementing everything in Python.

I'm not able to understand the difference between the cost function and the gradient. There are examples on the net where people compute the cost function, and then there are places where they don't and just go with the gradient descent update w := w − α∇_w f(w).

What is the difference between the two?

Answer

A cost function is something you want to minimize. For example, your cost function might be the sum of squared errors over your training set. Gradient descent is a method for finding the minimum of a function of multiple variables. So you can use gradient descent to minimize your cost function. If your cost is a function of K variables, then the gradient is the length-K vector that defines the direction in which the cost is increasing most rapidly. So in gradient descent, you follow the negative of the gradient to the point where the cost is a minimum. If someone is talking about gradient descent in a machine learning context, the cost function is probably implied (it is the function to which you are applying the gradient descent algorithm).
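To make the relationship concrete, here is a minimal NumPy sketch for logistic regression (an illustration added for this answer, not code from the original post; the function names are hypothetical). `cost` is the quantity being minimized, `gradient` is the direction in which that cost increases fastest, and `gradient_descent` is the update rule w := w − α∇_w J(w) from the question applied repeatedly:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: squashes z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, X, y):
    """Cross-entropy cost: the scalar we want to minimize."""
    h = sigmoid(X @ w)  # predicted probabilities
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient(w, X, y):
    """Gradient of the cost w.r.t. w: direction of steepest cost increase."""
    h = sigmoid(X @ w)
    return X.T @ (h - y) / len(y)

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Minimize the cost by stepping against its gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= alpha * gradient(w, X, y)  # the update from the question
    return w

# Tiny made-up dataset (first column of X is the bias term).
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.5]])
y = np.array([0, 0, 1, 1])
w = gradient_descent(X, y)
print(cost(w, X, y))  # cost shrinks as gradient descent runs
```

Note that `gradient_descent` never needs to evaluate `cost` itself; it only needs the gradient. That is why some tutorials compute the cost (usually just to monitor convergence) while others skip it and apply the update rule directly.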
