How to sum values of duplicate rows with awk?


Question

I have a csv file with 11 rows like this:

Order Date,Username,Order Number,No Resi,Quantity,Title,Update Date,Status,Price Per Item,Status Tracking,Alamat
05 Jun 2018,Mildred@email.com,205583995140400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Syahrul Address
05 Jun 2018,Mildred@email.com,205583995140400,,1,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Syahrul Address
05 Jun 2018,Martha@email.com,205486016644400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Faishal  Address
05 Jun 2018,Martha@email.com,205486016644400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Faishal  Address
05 Jun 2018,Misty@email.com,205588935534900,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Rutwan Address
05 Jun 2018,Misty@email.com,205588935534900,,1,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Rutwan Address


I want to remove the duplicates in that file and sum the values in the Quantity column. I want the result to be like this:

Order Date,Username,Order Number,No Resi,Quantity,Title,Update Date,Status,Price Per Item,Status Tracking,Alamat
05 Jun 2018,Mildred@email.com,205583995140400,,3,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Syahrul Address
05 Jun 2018,Martha@email.com,205486016644400,,4,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Faishal  Address
05 Jun 2018,Misty@email.com,205588935534900,,3,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Rutwan Address

I only want to sum the values in the Quantity column while leaving the rest as it is. I have tried the solution in this question, but the answer there only works if the file has just 2 rows; mine has 11, so it doesn't work. How do I do it with awk?

Answer

Adapting Karafka's solution directly, with some extra code added to print the lines in their original order (the order in which they appear in Input_file), as the OP requested.

awk -F, '
FNR==1{
  print;
  next}
{
  val=$5;
  $5="~";
  a[$0]+=val
}
!b[$0]++{
  c[++count]=$0}
END{
  for(i=1;i<=count;i++){
     sub("~",a[c[i]],c[i]);
     print c[i]}
}' OFS=, Input_file

Explanation: a line-by-line walkthrough of the code above.

awk -F, '                         ##Set the field separator to a comma.
FNR==1{                           ##If this is line 1 (the header), do the following.
  print;                          ##Print the current line.
  next}                           ##Skip all further statements for this line.
{
  val=$5;                         ##Save the 5th field (Quantity) of the current line in variable val.
  $5="~";                         ##Replace the 5th field with ~ so duplicate lines become identical (this forms the index for array a).
  a[$0]+=val                      ##Accumulate val in array a, indexed by the modified current line.
}
!b[$0]++{                         ##If array b has no entry yet for the current line (i.e. this is its first occurrence), do the following.
  c[++count]=$0}                  ##Store the current line in array c at index count (incremented by 1), preserving input order.
END{                              ##Start the END block of the awk code.
  for(i=1;i<=count;i++){          ##Loop i from 1 to the value of count.
     sub("~",a[c[i]],c[i]);       ##Substitute the ~ placeholder in c[i] with the summed Quantity stored in a[c[i]].
     print c[i]}                  ##Print the line with its summed Quantity restored.
}' OFS=, Input_file               ##Set OFS to a comma and name the Input_file here.
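As a sanity check, the script can be run against the sample data from the question. The snippet below writes that data to a file (orders.csv is an assumed name, not from the original post) and runs the answer's awk program on it:

```shell
# Write the sample CSV from the question to a file (orders.csv is an assumed name).
cat > orders.csv <<'EOF'
Order Date,Username,Order Number,No Resi,Quantity,Title,Update Date,Status,Price Per Item,Status Tracking,Alamat
05 Jun 2018,Mildred@email.com,205583995140400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Syahrul Address
05 Jun 2018,Mildred@email.com,205583995140400,,1,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Syahrul Address
05 Jun 2018,Martha@email.com,205486016644400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Faishal  Address
05 Jun 2018,Martha@email.com,205486016644400,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Faishal  Address
05 Jun 2018,Misty@email.com,205588935534900,,2,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Rutwan Address
05 Jun 2018,Misty@email.com,205588935534900,,1,Gold,05 Jun 2018 – 10:01,In Process,Rp3.000.000,Done,Rutwan Address
EOF

# Run the answer's script: duplicate lines (ignoring the Quantity field)
# collapse into one line each, with their Quantity values summed.
awk -F, 'FNR==1{print; next}
{val=$5; $5="~"; a[$0]+=val}
!b[$0]++{c[++count]=$0}
END{for(i=1;i<=count;i++){sub("~",a[c[i]],c[i]); print c[i]}}' OFS=, orders.csv
```

With this input, the command prints the header plus three merged lines whose Quantity values are 3, 4 and 3, matching the expected output shown in the question.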
