load data infile with counting duplicate rows in MySQL


Problem Description

I have a single-column CSV file containing duplicate records, with more than ten thousand records, so I am using LOAD DATA LOCAL INFILE.

Sample data:

ID

1
2
3
2
2
1
2

The MySQL table is called 'huts'.

My first question is whether it is possible to count, while inserting the data into the MySQL table 'huts', how many duplicates there are for each row. I would like to see the populated huts table as below:

ID    count

1     2
2     4
3     1
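For reference, one common MySQL pattern for maintaining such a count at insert time is a `count` column updated with INSERT ... ON DUPLICATE KEY UPDATE. This is an editor's sketch under an assumed schema, not part of the original question; note that LOAD DATA itself has no ON DUPLICATE KEY UPDATE clause, so rows would have to be inserted individually or via a staging table:

```sql
-- Assumed schema: a counter column that defaults to 1 on first insert.
CREATE TABLE huts (
  id INT NOT NULL PRIMARY KEY,
  `count` INT NOT NULL DEFAULT 1
);

-- Each duplicate id increments the counter instead of raising a
-- duplicate-key error.
INSERT INTO huts (id) VALUES (2)
ON DUPLICATE KEY UPDATE `count` = `count` + 1;
```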

My second question is that, if the above is not possible, my current working code returns the following:

ID

1 
2 
3 

$insert_query = "LOAD DATA LOCAL INFILE 'test.csv'
                 INTO TABLE huts
                 (id)";
if (!mysql_query($insert_query)) {
    echo "Can't insert student record: " . mysql_error($connection);
} else {
    echo "You have successfully inserted records into the huts table";
}

The table structure is:

CREATE TABLE `huts` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=116 DEFAULT CHARSET=latin1

The database storage engine is 'MyISAM'.

There is no unique key declared, so why is the query ignoring the duplicate rows? I expected the query to insert all of the rows, no matter whether there are duplicates.

Recommended Answer

Remove the AUTO_INCREMENT and PRIMARY KEY options from your table field if you don't care about duplicated data. The `id` column is declared as the primary key, and when LOAD DATA is used with the LOCAL keyword, rows that duplicate an existing key value are silently skipped (the same behavior as specifying IGNORE), which is why the duplicates disappear.
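A minimal sketch of that fix, combined with the counting asked for in the first question. The post-load aggregation step and the `hut_counts` table name are assumptions by the editor, not part of the original answer:

```sql
-- Recreate the table without PRIMARY KEY / AUTO_INCREMENT so every
-- row from the file is kept, duplicates included.
CREATE TABLE huts (
  id INT(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

LOAD DATA LOCAL INFILE 'test.csv'
INTO TABLE huts
(id);

-- Aggregate afterwards to get the per-id duplicate counts
-- (hypothetical table name `hut_counts`).
CREATE TABLE hut_counts AS
  SELECT id, COUNT(*) AS `count`
  FROM huts
  GROUP BY id;
```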
