PipelineData - output_mode="upload"


Problem Description

Hi everyone,


I am trying to define a Pipeline such that intermediate results of my Python scripts are uploaded to a specific path on my compute target, so that I can read those values afterwards. For that, I am defining PipelineData objects with the parameter output_mode="upload" set. However, at the end of my pipeline step I always get the exception:

AttributeError: 'DataStores' object has no attribute 'data_references'
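For context, here is a minimal sketch (not the original poster's code) of how a step with an uploaded intermediate output might be wired up with the Azure ML SDK v1 for Python; the datastore name, script name, and compute target below are hypothetical placeholders:

```python
from azureml.core import Workspace, Datastore
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# A single, named datastore (placeholder name) - not the ws.datastores collection.
blob_store = Datastore.get(ws, "workspaceblobstore")

# Intermediate output that should be uploaded to the datastore after the step runs.
intermediate = PipelineData(
    name="intermediate_results",
    datastore=blob_store,
    output_mode="upload",
    # Path on the compute target where the script writes its results (placeholder).
    output_path_on_compute="outputs/intermediate_results",
)

step = PythonScriptStep(
    name="process",
    script_name="process.py",            # hypothetical script
    arguments=["--output", intermediate],
    outputs=[intermediate],
    compute_target="cpu-cluster",         # hypothetical compute target
    source_directory="./scripts",
)

pipeline = Pipeline(workspace=ws, steps=[step])
```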


I went quickly through the official documentation, but I could not find an example where "upload" is used in Pipelines. In tutorials, all examples use "mount" as a default option when defining PipelineData objects.


Has anyone encountered the same problem?

Thanks in advance for any help!

Recommended Answer

Hello,


Is the output_path_on_compute a valid path used in the PipelineData object? 


Based on the error, it looks like it is generated because the data reference might not be correct. Could you please check whether the datastore setup and data reference configuration are correct? Here is an example of this setup.
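The linked example is not reproduced here, but a minimal sketch of the kind of datastore and data reference configuration being asked about might look as follows (Azure ML SDK v1; the datastore name and paths are placeholders, not from the original thread):

```python
from azureml.core import Workspace, Datastore
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import PipelineData

ws = Workspace.from_config()

# Resolve one concrete Datastore object (placeholder name), rather than
# passing the whole ws.datastores collection to PipelineData.
blob_store = Datastore.get(ws, "workspaceblobstore")

# Input data reference pointing at an existing path on the datastore.
raw_input = DataReference(
    datastore=blob_store,
    data_reference_name="raw_input",
    path_on_datastore="raw/input_data",   # placeholder path
)

# Output that is uploaded back to the same datastore when the step finishes.
processed = PipelineData(
    name="processed_data",
    datastore=blob_store,
    output_mode="upload",
    output_path_on_compute="outputs/processed",  # placeholder path on the compute target
)
```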

