Removing duplicate lines from multiple (2) text files in PHP
Problem Description
I have 2 .txt files. The first .txt file is curl data (from a robot); on every refresh it contains 2000 lines, including the new ones.
The second .txt file should hold only the new lines from the first .txt file; that is the file my script uses.
I can't remove the duplicates. (I mean that I try to extract the new values by comparing against the old ones.) As a result, the script always ends up working with both the new and the old data.
Is there a way to open both files, remove the duplicate lines, and save only the new lines to the second file?
THERE ARE THREE REFRESH EXAMPLES
Here is the FIRST refresh and the 2 .txt files.
The first .txt file (assume it has 2000 lines), refreshed by the curl robot:
Something here10
Something here9
Something here8
Something here7
Something here6
Something here5
Something here4
Something here3
Something here2
Something here1
The second .txt file, which I will use:
Something here10
Something here9
Something here8
Something here7
Something here6
Something here5
Something here4
Something here3
Something here2
Something here1
Here is the SECOND refresh and the 2 .txt files.
The first .txt file (assume it has 2000 lines), refreshed by the curl bot:
Something here14
Something here13
Something here12
Something here11
Something here10
Something here9
Something here8
Something here7
Something here6
Something here5
The second .txt file, which I will use:
Something here14
Something here13
Something here12
Something here11
Here is the THIRD refresh and the 2 .txt files.
The first .txt file (assume it has 2000 lines), refreshed by the curl bot:
Something here16
Something here15
Something here14
Something here13
Something here12
Something here11
Something here10
Something here9
Something here8
Something here7
The second .txt file, which I will use:
Something here16
Something here15
EDIT:
I posted two new refreshes.
Here is the FOURTH refresh and the 2 .txt files.
The first .txt file (assume it has 2000 lines), refreshed by the curl bot:
Something here20
Something here19
Something here18
Something here17
Something here16
Something here15
Something here14
Something here13
Something here12
Something here11
The second .txt file, which I will use:
Something here20
Something here19
Something here18
Something here17
Here is the FIFTH refresh and the 2 .txt files.
The first .txt file (assume it has 2000 lines), refreshed by the curl bot:
Something here24
Something here23
Something here22
Something here21
Something here20
Something here19
Something here18
Something here17
Something here16
Something here15
The second .txt file, which I will use:
Something here24
Something here23
Something here22
Something here21
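One way to reproduce the behaviour the refresh examples show, keeping only the lines absent from the previous snapshot, is PHP's array_diff. This is a hedged sketch of the desired behaviour, not code from the question; the sample data is copied from the THIRD and FOURTH refreshes above:

```php
<?php
// Previous snapshot (the first .txt file from the THIRD refresh above).
$previous = [
    "Something here16", "Something here15", "Something here14", "Something here13",
    "Something here12", "Something here11", "Something here10", "Something here9",
    "Something here8",  "Something here7",
];

// Current snapshot (the first .txt file from the FOURTH refresh above).
$current = [
    "Something here20", "Something here19", "Something here18", "Something here17",
    "Something here16", "Something here15", "Something here14", "Something here13",
    "Something here12", "Something here11",
];

// Keep only the lines that appear in the current snapshot but not the previous one;
// array_values() reindexes the result from 0.
$new_lines = array_values(array_diff($current, $previous));
// $new_lines matches the FOURTH refresh's second .txt file:
// ["Something here20", "Something here19", "Something here18", "Something here17"]
```

Note that array_diff compares by value, so this works regardless of where in the 2000-line file the old lines reappear.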
Solution
I tried to keep this as high level as possible, but in essence: push each line onto an array and then use array_unique to remove the duplicates:
$line_array = array();
$files = getFiles();                    // placeholder: however you enumerate the files
foreach ($files as $file) {
    $lines = $file->getAllLines();      // placeholder: however you read a file's lines
    foreach ($lines as $line) {
        $line_array[] = $line;
    }
}
$without_duplicates = array_unique($line_array);
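Applied to actual files, the answer's idea could look like the sketch below. The file names, the sample contents, and the choice to write the result back to the second file are assumptions for illustration, not part of the answer:

```php
<?php
// Create two small sample files (stand-ins for the real curl output).
file_put_contents('first.txt',  "Something here3\nSomething here2\nSomething here1\n");
file_put_contents('second.txt', "Something here2\nSomething here1\n");

// Read both files into arrays of lines, stripping newlines and blank lines.
$first  = file('first.txt',  FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$second = file('second.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// Merge both files and drop duplicate lines, as in the answer's sketch.
$without_duplicates = array_values(array_unique(array_merge($first, $second)));

// Save the de-duplicated lines back to the second file.
file_put_contents('second.txt', implode(PHP_EOL, $without_duplicates) . PHP_EOL);
```

array_unique keeps the first occurrence of each line, so lines from first.txt take precedence over older copies in second.txt.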
This concludes the article on removing duplicate lines from multiple (2) text files in PHP. We hope the recommended answer helps, and thank you for supporting IT屋!