Code golf - hex to (raw) binary conversion


Problem description


In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language."

I'm sure that for (some) scripting languages that could be achieved, and would like to see how. Can we prove that comment true, for C, too?

NB: this doesn't mean hex to ASCII binary - specifically the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space.

edit (by Brian Campbell) May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think these are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.

  1. The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout)
  2. The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
  3. The program must compile or run without any special options passed to the compiler or interpreter (so, 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
  4. The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with most significant nibble first.
  5. The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK).
  6. The program may write no additional bytes to output.
  7. Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on lowest number of lines of code; I would impose an 80 character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line).

Solution

edit Checkers has reduced my C solution to 46 bytes, which was then reduced to 44 bytes thanks to a tip from BillyONeal plus a bugfix on my part (no more infinite loop on bad input, now it just terminates the loop). Please give credit to Checkers for reducing this from 77 to 46 bytes:

main(i){while(scanf("%2x",&i)>0)putchar(i);}

And I have a much better Ruby solution than my last, at 38 bytes (down from 42; thanks to Joshua Swank for the regexp suggestion):

STDIN.read.scan(/\S\S/){|x|putc x.hex}

original solutions

C, in 77 bytes, or two lines of code (would be 1 if you could put the #include on the same line). Note that this has an infinite loop on bad input; the 44 byte solution with the help of Checkers and BillyONeal fixes the bug, and simply stops on bad input.

#include <stdio.h>
int main(){char c;while(scanf("%2x",&c)!=EOF)putchar(c);}

It's even just 6 lines if you format it normally:

#include <stdio.h>
int main() {
  char c;
  while (scanf("%2x",&c) != EOF)
    putchar(c);
}

Ruby, 79 bytes (I'm sure this can be improved):

STDOUT.write STDIN.read.scan(/[^\s]\s*[^\s]\s*/).map{|x|x.to_i(16)}.pack("c*")

Both of these take input from STDIN and write to STDOUT.
