String normalization in pure bash
Problem description
The characters 'É' (E\xcc\x81) and 'É' (\xc3\x89) have different code points. They look identical, yet when testing for a match the result is negative.
Python can normalize them, though: unicodedata.normalize('NFC', 'É'.decode('utf-8')) == unicodedata.normalize('NFC', 'É'.decode('utf-8')) returns True. And it prints as É.
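For reference, here is a Python 3 version of the same check (the snippet in the question uses Python 2's str.decode, which no longer exists on str in Python 3):

```python
import unicodedata

composed = "\u00c9"     # 'É' as a single precomposed code point (NFC form)
decomposed = "E\u0301"  # 'E' followed by U+0301 COMBINING ACUTE ACCENT (NFD form)

# The raw strings differ, but their NFC normalizations compare equal.
print(composed == decomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```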
Question: is there a way to normalize strings in pure bash*? I've looked into iconv, but as far as I know it can do a conversion to ASCII but no normalization.
* GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin14)
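The mismatch is easy to reproduce directly in bash (a minimal sketch; $'…' quoting is available in bash 3.2):

```shell
a=$'E\xcc\x81'   # 'E' + combining acute accent (decomposed, NFD)
b=$'\xc3\x89'    # precomposed 'É' (NFC)
# Both variables render as É, but bash compares bytes, so the test fails.
if [[ "$a" == "$b" ]]; then echo "match"; else echo "no match"; fi   # prints "no match"
```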
Recommended answer
If you have uconv available, that'll probably do the job:
$ echo -en "E\xcc\x81" | uconv -x Any-NFC | hexdump -C
00000000 c3 89
$ echo -en "\xc3\x89" | uconv -x Any-NFC | hexdump -C
00000000 c3 89
Any-NFD can likewise be used to get the decomposed form.
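If uconv (part of ICU) is not installed, one possible fallback is to shell out to python3 for the normalization step. This is a sketch, not part of the original answer; it assumes python3 is on PATH and the input is UTF-8:

```shell
# Hypothetical helper: NFC-normalize a UTF-8 string via python3's unicodedata.
# Bytes are passed through stdin/stdout buffers to stay locale-independent.
nfc() {
  printf '%s' "$1" | python3 -c 'import sys, unicodedata
sys.stdout.buffer.write(
    unicodedata.normalize("NFC", sys.stdin.buffer.read().decode("utf-8")).encode("utf-8"))'
}

a=$(nfc $'E\xcc\x81')
b=$(nfc $'\xc3\x89')
[[ "$a" == "$b" ]] && echo "match"   # prints "match"
```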