How to reliably guess the encoding between MacRoman, CP1252, Latin1, UTF-8, and ASCII


Problem Description



    At work it seems like no week ever passes without some encoding-related conniption, calamity, or catastrophe. The problem usually derives from programmers who think they can reliably process a "text" file without specifying the encoding. But you can't.

    So it's been decided to henceforth forbid files from ever having names that end in *.txt or *.text. The thinking is that those extensions mislead the casual programmer into a dull complacency regarding encodings, and this leads to improper handling. It would almost be better to have no extension at all, because at least then you know that you don’t know what you’ve got.

    However, we aren’t going to go that far. Instead you will be expected to use a filename that ends in the encoding. So for text files, for example, these would be something like README.ascii, README.latin1, README.utf8, etc.

    For files that demand a particular extension, if one can specify the encoding inside the file itself, such as in Perl or Python, then you shall do that. For files like Java source where no such facility exists internal to the file, you will put the encoding before the extension, such as SomeClass-utf8.java.
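
    In Python that means a PEP 263 coding declaration in the first line or two of the source file, and in Perl the use utf8; pragma for UTF-8 source. A minimal sketch of what such a declaration looks like:

        # -*- coding: utf-8 -*-
        # PEP 263 declaration: tells the interpreter (and editors) that this
        # source file itself is stored as UTF-8, so the literal below is unambiguous.
        print("naïve café")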

    For output, UTF-8 is to be strongly preferred.

    But for input, we need to figure out how to deal with the thousands of files in our codebase named *.txt. We want to rename all of them to fit into our new standard. But we can’t possibly eyeball them all. So we need a library or program that actually works.

    These are variously in ASCII, ISO-8859-1, UTF-8, Microsoft CP1252, or Apple MacRoman. Although we know we can tell if something is ASCII, and we stand a good chance of knowing if something is probably UTF-8, we’re stumped about the 8-bit encodings. Because we’re running in a mixed Unix environment (Solaris, Linux, Darwin) with most desktops being Macs, we have quite a few annoying MacRoman files. And these especially are a problem.

    For some time now I’ve been looking for a way to programmatically determine which of

    1. ASCII
    2. ISO-8859-1
    3. CP1252
    4. MacRoman
    5. UTF-8

    a file is in, and I haven’t found a program or library that can reliably distinguish between those three different 8-bit encodings. We probably have over a thousand MacRoman files alone, so whatever charset detector we use has to be able to sniff those out. Nothing I’ve looked at can manage the trick. I had big hopes for the ICU charset detector library, but it cannot handle MacRoman. I’ve also looked at modules to do the same sort of thing in both Perl and Python, but again and again it’s always the same story: no support for detecting MacRoman.

    What I am therefore looking for is an existing library or program that reliably determines which of those five encodings a file is in—and preferably more than that. In particular it has to distinguish between the three 8-bit encodings I’ve cited, especially MacRoman. The files are more than 99% English language text; there are a few in other languages, but not many.

    If it’s library code, our language preference is for it to be in Perl, C, Java, or Python, and in that order. If it’s just a program, then we don’t really care what language it’s in so long as it comes in full source, runs on Unix, and is fully unencumbered.

    Has anyone else had this problem of a zillion legacy text files randomly encoded? If so, how did you attempt to solve it, and how successful were you? This is the most important aspect of my question, but I’m also interested in whether you think encouraging programmers to name (or rename) their files with the actual encoding those files are in will help us avoid the problem in the future. Has anyone ever tried to enforce this on an institutional basis, and if so, was that successful or not, and why?

    And yes, I fully understand why one cannot guarantee a definite answer given the nature of the problem. This is especially the case with small files, where you don’t have enough data to go on. Fortunately, our files are seldom small. Apart from the random README file, most are in the size range of 50k to 250k, and many are larger. Anything more than a few K in size is guaranteed to be in English.

    The problem domain is biomedical text mining, so we sometimes deal with extensive and extremely large corpora, like all of PubMedCentral’s Open Access repository. A rather huge file is the BioThesaurus 6.0, at 5.7 gigabytes. This file is especially annoying because it is almost all UTF-8. However, some numbskull went and stuck a few lines in it that are in some 8-bit encoding—Microsoft CP1252, I believe. It takes quite a while before you trip on that one. :(

    Solution

    First, the easy cases:

    ASCII

    If your data contains no bytes above 0x7F, then it's ASCII. (Or a 7-bit ISO646 encoding, but those are very obsolete.)
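
    A minimal sketch of that check in Python, assuming the file has been read as raw bytes:

        def is_ascii(data: bytes) -> bool:
            # ASCII (and the 7-bit ISO 646 variants) never set the high bit.
            return all(b <= 0x7F for b in data)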

    UTF-8

    If your data validates as UTF-8, then you can safely assume it is UTF-8. Due to UTF-8's strict validation rules, false positives are extremely rare.
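
    In Python, a strict decode attempt is enough to sketch this test, since the codec rejects malformed sequences:

        def is_utf8(data: bytes) -> bool:
            try:
                data.decode("utf-8")   # strict error handling by default
                return True
            except UnicodeDecodeError:
                return False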

    ISO-8859-1 vs. windows-1252

    The only difference between these two encodings is that ISO-8859-1 has the C1 control characters where windows-1252 has the printable characters €‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ. I've seen plenty of files that use curly quotes or dashes, but none that use C1 control characters. So don't even bother with them, or ISO-8859-1, just detect windows-1252 instead.
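
    A quick way to see the difference from Python, decoding one byte from the 0x80–0x9F range both ways:

        sample = bytes([0x93])                     # one byte from the 0x80-0x9F range
        print(sample.decode("windows-1252"))       # '“'  - a printable curly quote
        print(repr(sample.decode("iso-8859-1")))   # '\x93' - an invisible C1 control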

    That now leaves you with only one question.

    How do you distinguish MacRoman from cp1252?

    This is a lot trickier.

    Undefined characters

    The bytes 0x81, 0x8D, 0x8F, 0x90, 0x9D are not used in windows-1252. If they occur, then assume the data is MacRoman.
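
    A sketch of that rule over raw bytes:

        NOT_IN_CP1252 = {0x81, 0x8D, 0x8F, 0x90, 0x9D}

        def looks_like_macroman(data: bytes) -> bool:
            # These byte values are unassigned in windows-1252, so their
            # presence in 8-bit text points to MacRoman.
            return any(b in NOT_IN_CP1252 for b in data)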

    Identical characters

    The bytes 0xA2 (¢), 0xA3 (£), 0xA9 (©), 0xB1 (±), 0xB5 (µ) happen to be the same in both encodings. If these are the only non-ASCII bytes, then it doesn't matter whether you choose MacRoman or cp1252.

    Statistical approach

    Count character (NOT byte!) frequencies in the data you know to be UTF-8. Determine the most frequent characters. Then use this data to determine whether the cp1252 or MacRoman characters are more common.
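
    One way to gather those reference frequencies, assuming you already have a corpus you trust to be UTF-8 (the file name here is only a placeholder):

        from collections import Counter

        def non_ascii_frequencies(path):
            # Count non-ASCII *characters* (not bytes) in text known to be UTF-8.
            with open(path, encoding="utf-8") as f:
                return Counter(ch for ch in f.read() if ord(ch) > 0x7F)

        # e.g. non_ascii_frequencies("known_utf8_corpus.txt").most_common(10)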

    For example, in a search I just performed on 100 random English Wikipedia articles, the most common non-ASCII characters are ·•–é°®’èö—. Based on this fact,

    • The bytes 0x92, 0x95, 0x96, 0x97, 0xAE, 0xB0, 0xB7, 0xE8, 0xE9, or 0xF6 suggest windows-1252.
    • The bytes 0x8E, 0x8F, 0x9A, 0xA1, 0xA5, 0xA8, 0xD0, 0xD1, 0xD5, or 0xE1 suggest MacRoman.

    Count up the cp1252-suggesting bytes and the MacRoman-suggesting bytes, and go with whichever is greatest.
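
    Putting the pieces together, a rough end-to-end sketch in Python (the hint sets simply encode the Wikipedia-derived byte lists above; they are a starting point, not a definitive classifier):

        CP1252_HINTS   = {0x92, 0x95, 0x96, 0x97, 0xAE, 0xB0, 0xB7, 0xE8, 0xE9, 0xF6}
        MACROMAN_HINTS = {0x8E, 0x8F, 0x9A, 0xA1, 0xA5, 0xA8, 0xD0, 0xD1, 0xD5, 0xE1}
        NOT_IN_CP1252  = {0x81, 0x8D, 0x8F, 0x90, 0x9D}

        def guess_encoding(data: bytes) -> str:
            if all(b <= 0x7F for b in data):
                return "ASCII"
            try:
                data.decode("utf-8")
                return "UTF-8"
            except UnicodeDecodeError:
                pass
            if any(b in NOT_IN_CP1252 for b in data):
                return "MacRoman"                  # bytes unused in windows-1252
            cp1252_votes   = sum(b in CP1252_HINTS for b in data)
            macroman_votes = sum(b in MACROMAN_HINTS for b in data)
            return "MacRoman" if macroman_votes > cp1252_votes else "windows-1252"

        # e.g. guess_encoding(open("README.txt", "rb").read())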

