Take the average of each index in a double[] hash map and assign it to an output double[]


Problem description


I want to implement the averaged perceptron algorithm, following this description (page 48, complete with pseudocode).

I think I'm pretty close, but I can't figure out the last step: I need to compute, for each index, the average of the weights calculated across all iterations, and then assign that value to a final array of weights. How would I implement that?

The structure of the hash map is Integer to double[]: the key is the iteration number, and the value is the array of weights from that iteration. So I guess the output would be something like

For all the hashmap keys
    for the length of the hashmap value at this key index
        ...something

So if the first weight was 2, 4, and 3 across the iterations, I want to assign the weight 3, i.e. their average, to that index of the final double[] array, and so on for every index.
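That nested-loop idea could be sketched as follows. This is a minimal, self-contained sketch; the class name, method name, and toy values are my own, not from the original code. It sums each index across all cached iterations and then divides by the number of iterations:

```java
import java.util.HashMap;
import java.util.Map;

public class WeightAverager {
    // For every cached iteration, add its weight vector into a running sum,
    // then divide each index by the number of iterations.
    static double[] average(Map<Integer, double[]> cachedWeights, int length) {
        double[] averaged = new double[length];
        for (double[] weights : cachedWeights.values()) {   // for all the hashmap keys
            for (int i = 0; i < length; i++) {              // for the length of the value
                averaged[i] += weights[i];
            }
        }
        for (int i = 0; i < length; i++) {
            averaged[i] /= cachedWeights.size();
        }
        return averaged;
    }

    public static void main(String[] args) {
        // The 2, 4, 3 example from above: the average should come out to 3.
        Map<Integer, double[]> cached = new HashMap<>();
        cached.put(1, new double[]{2.0});
        cached.put(2, new double[]{4.0});
        cached.put(3, new double[]{3.0});
        System.out.println(average(cached, 1)[0]); // prints 3.0
    }
}
```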

Below is the pertinent code. The full code is here on my GitHub in case you'd like to check it out.

   //store weights to be averaged. 
   Map<Integer,double[]> cached_weights = new HashMap<Integer,double[]>();


   final int globoDictSize = globoDict.size(); // number of features

   // weights total 32 (31 for input variables and one for bias)
   double[] weights = new double[globoDictSize + 1];
   for (int i = 0; i < weights.length; i++) 
   {
       //weights[i] = Math.floor(Math.random() * 10000) / 10000;
       //weights[i] = randomNumber(0,1);
       weights[i] = 0.0;
   }


   int inputSize = trainingPerceptronInput.size();
   double[] outputs = new double[inputSize];
   final double[][] a = Prcptrn_InitOutpt.initializeOutput(trainingPerceptronInput, globoDictSize, outputs, LABEL);


   double globalError;
   int iteration = 0;
   do 
   {
       iteration++;
       globalError = 0;
       // loop through all instances (complete one epoch)
       for (int p = 0; p < inputSize; p++) 
       {
           // calculate predicted class
           double output = Prcptrn_CalcOutpt.calculateOutput(THETA, weights, a, p);
           // difference between predicted and actual class values
           //always either zero or one
           double localError = outputs[p] - output;

           int i;
           for (i = 0; i < a.length; i++) 
           {
               weights[i] += LEARNING_RATE * localError * a[i][p];
           }
           weights[i] += LEARNING_RATE * localError;

           // summation of squared error (error value for all instances)
           globalError += localError * localError;
       }

Here's the part I was referring to above:

       //calc averages
       for (Entry<Integer, double[]> entry : cached_weights.entrySet()) 
       {
            int key = entry.getKey();
            double[] value = entry.getValue();
            // ...
        }

       /* Root Mean Squared Error */
       //System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt(globalError / inputSize));
   } 
   while (globalError != 0 && iteration <= MAX_ITER);


   //calc averages
   Iterator it = cached_weights.entrySet().iterator();
   while( it.hasNext() ) 
   {
       Map.Entry pair = (Map.Entry)it.next();
       System.out.println(pair.getKey() + " = " + pair.getValue());

       it.remove(); // avoids a ConcurrentModificationException
   }
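As an aside, cached_weights is declared above but never actually filled inside the training loop; presumably each epoch's weights should be snapshotted into it before moving on. The sketch below (the loop and values are made up, not from the original code) shows why clone() matters when doing that: storing the array itself would make every map entry alias the same, final weight vector.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotDemo {
    public static void main(String[] args) {
        Map<Integer, double[]> cached = new HashMap<>();
        double[] weights = new double[]{0.0};

        for (int iteration = 1; iteration <= 3; iteration++) {
            weights[0] += 1.0;                       // pretend this epoch updated the weights
            cached.put(iteration, weights.clone());  // snapshot; without clone() every
                                                     // entry would point at the same array
        }

        System.out.println(cached.get(1)[0]); // prints 1.0 (would print 3.0 without clone())
    }
}
```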

Solution

I guess something like this would work:

   //calc averages
   for (Entry<Integer, double[]> entry : cached_weights.entrySet()) 
   {
        int key = entry.getKey();
        double[] value = entry.getValue();
        AVERAGED_WEIGHTS[ key - 1 ] +=  value[ key - 1 ]; 
    }

But then you'd also need to divide by the number of iterations at the end, I guess. Something like: if the key is the last one, i.e. there is no larger iteration, then divide by it at that point.

Maybe this?

   //calc averages
   for (Entry<Integer, double[]> entry : cached_weights.entrySet()) 
   {
        int key = entry.getKey();
        double[] value = entry.getValue();
        AVERAGED_WEIGHTS[ key - 1 ] +=  value[ key - 1 ]; 

        if (key == iteration) 
        {
            AVERAGED_WEIGHTS[ key - 1 ] /= key;
        }
    }
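One caveat with the snippets above: they only ever touch index key - 1 of each cached array, so with N iterations only N weight positions are accumulated, and each ends up divided by a different key. A version that accumulates every index of every cached vector and divides once at the end might look like this. It is a sketch reusing the same names; AVERAGED_WEIGHTS is assumed to be as long as the weight arrays, and the toy map stands in for the real cached_weights:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

public class AveragedPerceptronFix {
    public static void main(String[] args) {
        // Toy stand-in for cached_weights: two iterations, two weights each.
        Map<Integer, double[]> cached_weights = new HashMap<>();
        cached_weights.put(1, new double[]{2.0, 1.0});
        cached_weights.put(2, new double[]{4.0, 3.0});

        double[] AVERAGED_WEIGHTS = new double[2];

        // calc averages: accumulate every index of every iteration's vector...
        for (Entry<Integer, double[]> entry : cached_weights.entrySet()) {
            double[] value = entry.getValue();
            for (int i = 0; i < AVERAGED_WEIGHTS.length; i++) {
                AVERAGED_WEIGHTS[i] += value[i];
            }
        }
        // ...then divide once, by the number of cached iterations.
        for (int i = 0; i < AVERAGED_WEIGHTS.length; i++) {
            AVERAGED_WEIGHTS[i] /= cached_weights.size();
        }

        System.out.println(AVERAGED_WEIGHTS[0] + " " + AVERAGED_WEIGHTS[1]); // prints 3.0 2.0
    }
}
```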
