Regularized logistic regression with vectorization


Problem Description


I'm trying to implement a vectorized version of regularized logistic regression. I found a post that explains the regularized version, but I don't understand it.

To make it easy, I will copy the code below:

hx = sigmoid(X * theta);   % hypothesis h_theta(x), an (m x 1) vector
m = length(X);             % number of training examples (assumes m >= n+1; size(X, 1) would be safer)
J = (sum(-y' * log(hx) - (1 - y') * log(1 - hx)) / m) + lambda * sum(theta(2:end).^2) / (2*m);   % cost, with theta(1) left out of the penalty
grad = ((hx - y)' * X / m)' + lambda .* theta .* [0; ones(length(theta)-1, 1)] ./ m;             % gradient; the [0; ones(...)] mask skips theta(1)
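
For reference, the sigmoid helper called above isn't shown in the post; a minimal version of the standard logistic function would be:

function g = sigmoid(z)
% logistic function, applied element-wise
g = 1 ./ (1 + exp(-z));
end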

I understand the first part of the cost equation. If I'm correct, it could be represented as:

J = ((-y' * log(hx)) - ((1-y)' * log(1-hx)))/m; 

The problem is the regularization term. Let's look at it in more detail:

Dimensions:

X = (m x (n+1))
theta = ((n+1) x 1)

I don't understand why he leaves the first element of theta (theta_0) out of the equation, when in theory the regularization term is:

  lambda * sum(theta.^2) / (2*m)

and it has to take into account all the thetas.

For the gradient descent, I think that this equation is equivalent:

L = eye(length(theta));   % identity matrix, (n+1) x (n+1)
L(1,1) = 0;               % zero the entry for theta_0 so it is not regularized

grad = (1/m) * X' * (hx - y) + (lambda * (L * theta)) / m;

Solution

I'm also new here...

In Matlab, indices begin at 1, while in the mathematical notation they begin at 0 (the indices in the formula you mentioned also start from 0).

So, in theory, the first element of theta (theta_0, which is theta(1) in Matlab) also needs to be left out of the equation.
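
To make the index shift concrete, here is a small sketch with a hypothetical 3-element theta:

theta = [0.5; -1.2; 3.0];   % hypothetical values: math's [theta_0; theta_1; theta_2]
theta(1)                    % = 0.5 -> theta_0, the intercept, which is not regularized
theta(2:end)                % = [-1.2; 3.0] -> theta_1 and theta_2, the terms that are penalized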

And as for your second question, you're right! It is an equivalent, clean equation!
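
A quick numerical check with made-up data (reusing the sigmoid sketch above) confirms that the masked-vector form and the L-matrix form produce the same gradient:

% made-up problem size and data, just for the comparison
m = 5; n = 3;
X = [ones(m, 1), rand(m, n)];   % m x (n+1), first column is the intercept feature
y = rand(m, 1) > 0.5;           % random binary labels
theta = rand(n + 1, 1);
lambda = 1;
hx = sigmoid(X * theta);

% form 1: element-wise mask that zeroes out the theta_0 component
grad1 = ((hx - y)' * X / m)' + lambda .* theta .* [0; ones(n, 1)] ./ m;

% form 2: selector matrix L = eye(...) with L(1,1) = 0
L = eye(n + 1);
L(1,1) = 0;
grad2 = (1/m) * X' * (hx - y) + (lambda * (L * theta)) / m;

max(abs(grad1 - grad2))   % ~0, up to floating-point rounding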
