How to write Azure machine learning batch scoring results to Data Lake?


Question

I'm trying to write the output of batch scoring into datalake:

    from datetime import datetime
    
    from azureml.pipeline.core import PipelineData
    from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep
    
    parallel_step_name = "batchscoring-" + datetime.now().strftime("%Y%m%d%H%M")
    
    output_dir = PipelineData(name="scores",
                              datastore=def_ADL_store,
                              output_mode="upload",
                              output_path_on_compute="path in data lake")
    
    parallel_run_config = ParallelRunConfig(
        environment=curated_environment,
        entry_script="use_model.py",
        source_directory="./",
        output_action="append_row",
        mini_batch_size="20",
        error_threshold=1,
        compute_target=compute_target,
        process_count_per_node=2,
        node_count=2
    )
    
    batch_score_step = ParallelRunStep(
        name=parallel_step_name,
        inputs=[test_data.as_named_input("test_data")],
        output=output_dir,
        parallel_run_config=parallel_run_config,
        allow_reuse=False
    )

However I meet the error: "code": "UserError", "message": "User program failed with Exception: Missing argument --output or its value is empty."

How can I write results of batch score to data lake?

Answer

I don't think PipelineData supports ADLS. My suggestion is to use the workspace's default blob store for the PipelineData, then use a DataTransferStep to copy the results to the data lake after the ParallelRunStep completes.
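A minimal sketch of that arrangement, assuming the ADLS datastore `def_ADL_store` is already registered in the workspace and an attached Azure Data Factory compute exists (the compute name `adf-compute` and the destination path `scores` are placeholders):

    from azureml.core import Workspace, Datastore
    from azureml.core.compute import DataFactoryCompute
    from azureml.pipeline.core import PipelineData
    from azureml.pipeline.steps import DataTransferStep
    
    ws = Workspace.from_config()
    
    # Write the ParallelRunStep output to the default blob store instead of ADLS
    def_blob_store = ws.get_default_datastore()
    output_dir = PipelineData(name="scores", datastore=def_blob_store)
    
    # ... define parallel_run_config and batch_score_step as in the question,
    # but with output=output_dir pointing at the blob store ...
    
    # Copy the scores from blob storage to the ADLS datastore
    adls_store = Datastore.get(ws, "def_ADL_store")
    data_factory = DataFactoryCompute(ws, "adf-compute")  # placeholder name
    
    transfer_step = DataTransferStep(
        name="transfer-scores-to-adls",
        source_data_reference=output_dir,
        destination_data_reference=adls_store.path("scores"),
        compute_target=data_factory
    )

Both steps then go into the same Pipeline; because `transfer_step` consumes `output_dir`, the SDK runs it after the scoring step finishes.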

