Download first 1000 images from Google search


Problem description

I am searching Google Images at

http://www.google.com/search?hl=en&q=panda&bav=on.2,or.r_gc.r_pw.r_cp.r_qf.,cf.osb&biw=1287&bih=672&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=qW4FUJigJ4jWtAbToInABg

and the result is thousands of photos. I am looking for a shell script that will download the first n images, for example 1000 or 500.

How can I do this?

I guess I need some advanced regular expressions or something like that. I have tried many things, but to no avail; can someone help me, please?
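
For reference, the core trick the accepted answer below relies on is not an advanced regular expression at all: fetch a results page with wget and pull the imgurl= parameter out of each /imgres? link. Here is a minimal sketch of just that idea, assuming the old HTML result format (the answer's updates note that modern Google results are rendered with JavaScript, so this alone no longer suffices, and the URLs come back percent-encoded, which the full script below decodes):

#!/bin/bash
# Minimal sketch: list image URLs from one Google Images results page.
# Assumes the old HTML format where each thumbnail links to /imgres?imgurl=...
query="panda"
url="http://www.google.com/search?tbm=isch&q=$query"
# fetch the page, keep the imgurl= parameters, strip the key, take the first 20
wget -q -U 'Mozilla/5.0' -O- "$url" | grep -o 'imgurl=[^&"]*' | sed 's/^imgurl=//' | head -n 20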

Recommended answer

Update 3: I fixed the script to work with PhantomJS 2.x.

Update 2: I modified the script to use PhantomJS. It's harder to install, but at least it works again: http://sam.nipl.net/b/google-images http://sam.nipl.net/b/google-images.js

Update 1: Unfortunately, this no longer works. It seems JavaScript and other magic are now required to find where the images are located. Here is a version of the script for Yahoo image search: http://sam.nipl.net/code/nipl-tools/bin/yimg

Original answer: I hacked something together for this. I normally write smaller tools and use them together, but you asked for one shell script, not three dozen. This is deliberately dense code.

http://sam.nipl.net/code/nipl-tools/bin/google-images

It seems to work very well so far. Please let me know if you can improve it, or suggest any better coding techniques (given that it's a shell script).

#!/bin/bash
[ $# = 0 ] && { prog=`basename "$0"`;
echo >&2 "usage: $prog query count parallel safe opts timeout tries agent1 agent2
e.g. : $prog ostrich
       $prog nipl 100 20 on isz:l,itp:clipart 5 10"; exit 2; }
query=$1 count=${2:-20} parallel=${3:-10} safe=$4 opts=$5 timeout=${6:-10} tries=${7:-2}
agent1=${8:-Mozilla/5.0} agent2=${9:-Googlebot-Image/1.0}
query_esc=`perl -e 'use URI::Escape; print uri_escape($ARGV[0]);' "$query"`
dir=`echo "$query_esc" | sed 's/%20/-/g'`; mkdir "$dir" || exit 2; cd "$dir"
url="http://www.google.com/search?tbm=isch&safe=$safe&tbs=$opts&q=$query_esc" procs=0
echo >.URL "$url" ; for A; do echo >>.args "$A"; done
htmlsplit() { tr '\n\r \t' ' ' | sed 's/</\n</g; s/>/>\n/g; s/\n *\n/\n/g; s/^ *\n//; s/ $//;'; }
for start in `seq 0 20 $[$count-1]`; do
wget -U"$agent1" -T"$timeout" --tries="$tries" -O- "$url&start=$start" | htmlsplit
done | perl -ne 'use HTML::Entities; /^<a .*?href="(.*?)"/ and print decode_entities($1), "\n";' | grep '/imgres?' |
perl -ne 'use URI::Escape; ($img, $ref) = map { uri_unescape($_) } /imgurl=(.*?)&imgrefurl=(.*?)&/;
$ext = $img; for ($ext) { s,.*[/.],,; s/[^a-z0-9].*//i; $_ ||= "img"; }
$save = sprintf("%04d.$ext", ++$i); print join("\t", $save, $img, $ref), "\n";' |
tee -a .images.tsv |
while IFS=$'\t' read -r save img ref; do
wget -U"$agent2" -T"$timeout" --tries="$tries" --referer="$ref" -O "$save" "$img" || rm "$save" &
procs=$[$procs + 1]; [ $procs = $parallel ] && { wait; procs=0; }
done ; wait

Features:

  • under 1500 bytes
  • explains usage, if run with no args
  • downloads full images in parallel
  • safe search option
  • image size, type, etc. opts string
  • timeout / retries options
  • impersonates googlebot to fetch all images
  • numbers image files
  • saves metadata
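
Going by the usage line at the top of the script, and assuming it is saved as google-images somewhere on PATH, invocations look like the following (the opts string in the second call is the one from the script's own help text):

# 500 panda images, 20 downloads in parallel, safe search on
google-images panda 500 20 on

# large clipart only, 10-second timeout, 2 tries per image
google-images "polar bear" 100 20 on isz:l,itp:clipart 10 2

Each run creates a new directory named after the query and writes the numbered image files, plus the .images.tsv metadata, into it.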

I'll post a modular version some time, to show that it can be done quite nicely with a set of shell scripts and simple tools.
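
As a rough illustration of that idea (an illustration only, not the promised modular version), the same pipeline splits naturally into a couple of small hypothetical tools plus a thin driver:

#!/bin/bash
# Hypothetical modular sketch. It assumes two small helper tools on PATH:
#   get-image-list QUERY COUNT     prints "SAVE<TAB>IMGURL<TAB>REFURL" lines
#   fetch-image SAVE URL REFERER   downloads one image with the right referer/agent
query=${1:?usage: $0 query [count]}
count=${2:-100}
procs=0
while IFS=$'\t' read -r save img ref; do
    fetch-image "$save" "$img" "$ref" &        # one background job per image
    procs=$((procs + 1))
    [ "$procs" -ge 10 ] && { wait; procs=0; }  # keep at most 10 downloads in flight
done < <(get-image-list "$query" "$count")
wait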
