Append output of multiple curl requests to a file from shell script


Problem Description

I'm trying to fetch the JSON output from an internal API and add 100 to a parameter value between cURL requests. I need to loop through because it restricts the maximum number of results per request to 100. I was told to "increment and you should be able to get what you need".

Anyway, here's what I wrote:

#!/bin/bash

COUNTER=100
until [ COUNTER -gt 30000 ]; do
    curl -vs "http://example.com/locations/city?limit=100&offset=$COUNTER" >> cities.json
    let COUNTER=COUNTER+100
done

The problem is that I get a bunch of weird messages in the terminal, and the file I'm trying to redirect the output to still contains its original 100 objects. I feel like I'm probably missing something terrifically obvious. Any thoughts? I did use a somewhat old tutorial on the until loop, so maybe it's a syntax issue?

Thanks in advance!

EDIT: I'm not opposed to a completely alternate method, but I had hoped this would be somewhat straightforward. I figured my lack of experience was the main limiter.

Recommended Answer

You might find you can do this faster, and pretty easily with GNU Parallel:

parallel -k curl -vs "http://example.com/locations/city?limit=100\&offset={}" ::: $(seq 100 100 30000) > cities.json
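
The -k flag keeps the outputs in the same order as the offsets, so cities.json ends up with the pages in sequence even though the requests run in parallel.

If you would rather keep the plain bash loop from the question, note two things: the until test compares the literal string COUNTER instead of the variable's value (hence the "integer expression expected" errors), and curl's -v flag is what prints the extra messages to the terminal, since verbose output goes to stderr rather than to the redirected file. A minimal corrected sketch, assuming the same example.com endpoint used in the question:

#!/bin/bash

# Sketch of a corrected version of the loop from the question;
# example.com/locations/city is the placeholder endpoint used above.
COUNTER=100
until [ "$COUNTER" -gt 30000 ]; do
    # -s silences the progress meter; dropping -v removes the verbose noise
    curl -s "http://example.com/locations/city?limit=100&offset=$COUNTER" >> cities.json
    COUNTER=$((COUNTER + 100))
done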
