Shell script using curl to loop through urls
Problem description
I've been trying to create a simple script that will take a list of queries from a .txt file, append the main url variable, then scrape the content and output it to a text file.
Here's what I have so far:
#!/bin/bash
url="example.com/?q="
for i in $(cat query.txt); do
content=$(curl -o $url $i)
echo $url $i
echo $content >> output.txt
done
The list:
images
news
stuff
other
Error log:
curl: (6) Could not resolve host: other; nodename nor servname provided, or not known
example.com/?q= other
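The `(6) Could not resolve host` error comes from how `curl -o $url $i` is parsed: `-o` takes the *next* argument as a local output filename, so the loop saves to a file named after `$url` and then tries to fetch each bare query word (`other`) as if it were a hostname. A minimal demonstration of what `-o` does, using a `file://` URL and hypothetical paths so it runs without a network:

```shell
# -o names the local file curl should write to; the URL to fetch is a
# separate argument. Here the "download" is just a local file copy.
printf 'hello\n' > /tmp/source.txt
curl -s -o /tmp/copy.txt "file:///tmp/source.txt"
cat /tmp/copy.txt   # prints: hello
```

So in the original loop, `-o` should be dropped entirely: the goal is to capture the page body in a variable, not to save it under the URL's name.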
If I use this command straight from the command line I get some output into the file:
curl -L http://example.com/?q=other >> output.txt
Ultimately, I'd like the output to be:
fetched: http://example.com/?q=other
content: the output of the page
followed by the next query in the list.
Answer
Use more quotes!
- http://mywiki.wooledge.org/Quotes
- http://mywiki.wooledge.org/Arguments
- http://wiki.bash-hackers.org/syntax/words
Try this:
url="example.com/?q="
for i in $(cat query.txt); do
content="$(curl -s "$url$i")"
echo "$content" >> output.txt
done
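To get the exact `fetched:`/`content:` format asked for above, the same idea can be sketched with a `while read` loop instead of `for i in $(cat …)`, so queries survive word-splitting. The sample `query.txt` is created inline purely for illustration, the host is the placeholder `example.com` from the question, and `-m 5` just bounds each request:

```shell
#!/bin/bash
# Sample input for illustration; in practice query.txt already exists.
printf 'images\nnews\n' > query.txt

url="http://example.com/?q="
: > output.txt                               # start with an empty output file

while IFS= read -r query; do
    [ -z "$query" ] && continue              # skip blank lines
    # -s: silent, -L: follow redirects, -m 5: per-request timeout
    content="$(curl -sL -m 5 "$url$query")"
    {
        echo "fetched: $url$query"
        echo "content: $content"
    } >> output.txt
done < query.txt
```

Even if a fetch fails, the `fetched:` line still records which URL was attempted, which makes the output file easier to audit.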