Is it possible to read only first N bytes from the HTTP server using Linux command?
Here is the question.
Given the url http://www.example.com, can we read the first N bytes out of the page?
- Using wget, we can only download the whole page.
- Using curl, there is -r: "-r 0-499" requests the first 500 bytes, which seems to solve the problem. However, the curl manual warns:
  You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
- Using urllib in Python. There is a similar question here, but according to Konstantin's comment, is that really true?
Last time I tried this technique it failed because it was actually impossible to read only a specified amount of data from the HTTP server, i.e. you implicitly read the whole HTTP response and only then read the first N bytes out of it. So in the end you downloaded the whole 1 GB malicious response.
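For what it's worth, Konstantin's claim is easy to test: in CPython, urllib streams the response body over the socket, so resp.read(N) pulls only about N bytes before you close the connection. A rough sketch, where the throwaway local server (its handler, port, and the pretend 1 GB body are all made up purely for the demonstration) stands in for the malicious host:

```python
import http.server
import threading
import urllib.request

class BigResponseHandler(http.server.BaseHTTPRequestHandler):
    """Toy handler that tries to stream a very large body."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(10**9))  # claim a ~1 GB body
        self.end_headers()
        try:
            chunk = b"x" * 65536
            for _ in range(10**9 // 65536):
                self.wfile.write(chunk)  # fails once the client hangs up
        except (BrokenPipeError, ConnectionResetError):
            pass  # the client closed the connection after N bytes
    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), BigResponseHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    first_n = resp.read(500)   # reads only ~500 bytes off the socket
# leaving the with-block closes the connection; the 1 GB never arrives
server.shutdown()
print(len(first_n))
```

So the danger is not urllib itself but forgetting to close the response early; call read(N) and then close, rather than read() with no argument.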
So the question is: how can we read the first N bytes from an HTTP server in practice?
Regards & Thanks
curl <url> | head -c 499
or
curl <url> | dd bs=1 count=499
should do. (Note that dd counts in 512-byte blocks by default, hence bs=1.) Once head or dd exits, the pipe closes and curl is stopped by SIGPIPE, so the rest of the response is never downloaded.
Also there are simpler utils with perhaps broader availability, like
netcat host 80 <<"HERE" | dd bs=1 count=499 of=output.fragment
GET /urlpath/query?string=more&bloddy=stuff
HERE