Google Cloud Data Fusion -- building pipeline from REST API endpoint source


Problem description

Attempting to build a pipeline to read from a 3rd party REST API endpoint data source.

I am using the HTTP (version 1.2.0) plugin found in the Hub.

The request URL is: https://api.example.io/v2/somedata?return_count=false

Sample response body:

{
  "paging": {
    "token": "12456789",
    "next": "https://api.example.io/v2/somedata?return_count=false&__paging_token=123456789"
  },
  "data": [
    {
      "cID": "aerrfaerrf",
      "first": true,
      "_id": "aerfaerrfaerrf",
      "action": "aerrfaerrf",
      "time": "1970-10-09T14:48:29+0000",
      "email": "example@aol.com"
    },
    {...}
  ]
}

The main error in the logs is:

java.lang.NullPointerException: null
    at io.cdap.plugin.http.source.common.pagination.BaseHttpPaginationIterator.getNextPage(BaseHttpPaginationIterator.java:118) ~[1580429892615-0/:na]
    at io.cdap.plugin.http.source.common.pagination.BaseHttpPaginationIterator.ensurePageIterable(BaseHttpPaginationIterator.java:161) ~[1580429892615-0/:na]
    at io.cdap.plugin.http.source.common.pagination.BaseHttpPaginationIterator.hasNext(BaseHttpPaginationIterator.java:203) ~[1580429892615-0/:na]
    at io.cdap.plugin.http.source.batch.HttpRecordReader.nextKeyValue(HttpRecordReader.java:60) ~[1580429892615-0/:na]
    at io.cdap.cdap.etl.batch.preview.LimitingRecordReader.nextKeyValue(LimitingRecordReader.java:51) ~[cdap-etl-core-6.1.1.jar:na]
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:214) ~[spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.11-2.3.3.jar:2.3.3]
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439) ~[scala-library-2.11.8.jar:na]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:128) ~[spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127) ~[spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1415) ~[spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139) [spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83) [spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78) [spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) [spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.scheduler.Task.run(Task.scala:109) [spark-core_2.11-2.3.3.jar:2.3.3]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) [spark-core_2.11-2.3.3.jar:2.3.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]

Possible issues

After trying to troubleshoot this for a while, I'm thinking the issue might be with:

  • The Data Fusion HTTP plugin has a lot of methods to deal with pagination
    • Based on the response body above, it seems like the best option for Pagination Type is Link in Response Body
    • For the required Next Page JSON/XML Field Path parameter, I've tried $.paging.next and paging/next. Neither works.
    • I have verified that the link in /paging/next works when opened in Chrome
  • When simply trying to view the request URL in Chrome, a prompt pops up asking for a username and password
    • Only the API key needs to be entered as the username to get past this prompt in Chrome
    • To do this in the Data Fusion HTTP plugin, the API key is used as the Username in the Basic Authentication section (see the sketch after this list)
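
For reference, the same authentication and pagination scheme can be exercised outside Data Fusion with a short Python sketch. This is a minimal sketch only, assuming the requests library, the API key as the Basic Auth username with an empty password, and that paging.next is simply absent on the last page; the key and URL are placeholders:

import requests

API_KEY = "your-api-key"  # hypothetical placeholder
url = "https://api.example.io/v2/somedata?return_count=false"

while url:
    # API key as the Basic Auth username, empty password (mirrors the Chrome prompt).
    resp = requests.get(url, auth=(API_KEY, ""))
    resp.raise_for_status()
    body = resp.json()

    for record in body.get("data", []):
        print(record)  # each element of the "data" array is one record

    # "$.paging.next" in JSONPath terms is body["paging"]["next"];
    # assume it is absent on the last page, which ends the loop.
    url = body.get("paging", {}).get("next")

If this loop terminates cleanly, $.paging.next is pointing at the right field, and the NullPointerException in getNextPage may instead stem from how the plugin handles the final page, where paging.next is absent.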

Anyone have any success in creating a pipeline in Google Cloud Data Fusion where the data source is a REST API?

Recommended answer

Anyone have any success in creating a pipeline in Google Cloud Data Fusion where the data source is a REST API?

This is not the optimal way to achieve this. A better approach would be to ingest the data into Pub/Sub (see the Service APIs Overview) and then use Pub/Sub as the source for your pipeline. This provides a simple and reliable staging location for your data on its way to processing, storage, and analysis; see the documentation for the Pub/Sub API. To use this in conjunction with Dataflow, follow the steps in the official documentation: Using Pub/Sub with Dataflow.
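
As a rough illustration of the suggested approach, a small publisher job could poll the REST endpoint and stage each record in Pub/Sub. This is a sketch only, assuming the google-cloud-pubsub client library; the project ID, topic ID, and API key below are hypothetical placeholders:

import json

import requests
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

PROJECT_ID = "my-project"       # hypothetical placeholder
TOPIC_ID = "somedata-ingest"    # hypothetical placeholder
API_KEY = "your-api-key"        # hypothetical placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

url = "https://api.example.io/v2/somedata?return_count=false"
while url:
    body = requests.get(url, auth=(API_KEY, "")).json()
    for record in body.get("data", []):
        # One Pub/Sub message per record; the payload must be bytes.
        publisher.publish(topic_path, json.dumps(record).encode("utf-8")).result()
    url = body.get("paging", {}).get("next")

From there the pipeline reads from a Pub/Sub subscription instead of calling the API directly, which decouples ingestion from processing.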

