Azure Data Factory Source Dataset value from Parameter
Question
I have a dataset in Azure Data Factory backed by a CSV file. I added an additional column to the dataset and want to populate its value from a dataset parameter, but the value never gets copied into the column:
"type": "AzureBlob",
"structure":
[
{
"name": "MyField",
"type": "String"
}
]
I also have a parameter defined:
"parameters": {
"MyParameter": {
"type": "String",
"defaultValue": "ABC"
}
}
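For reference, a dataset parameter is normally supplied by the activity that consumes the dataset. A minimal sketch of a Copy activity input passing a value into MyParameter might look like the following (the activity, dataset, and pipeline parameter names are illustrative, not from the original post):
```json
{
    "name": "CopyFromCsv",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "MyCsvDataset",
            "type": "DatasetReference",
            "parameters": {
                "MyParameter": "@pipeline().parameters.MyValue"
            }
        }
    ]
}
```
This only sets the parameter's value for the run; as the answer below explains, it does not by itself write that value into a column.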
How can I copy the parameter value into the column? I tried the following, but it doesn't work:
"type": "AzureBlob",
"structure":
[
{
"name": "MyField",
"type": "String",
"value": "@dataset().MyParameter"
}
]
The destination column is NULL even though the parameter value is set.
Answer
Based on the document Expressions and functions in Azure Data Factory, @dataset().XXX is not supported as a column value in Azure Data Factory so far. So you can't use a parameter value as a custom column in the source or sink of the native Copy activity directly.
However, you could adopt the following workarounds:
1. You could create a custom activity and write code to do whatever you need.
2. You could stage the CSV file in an Azure Data Lake, then execute a U-SQL script that reads the data from the file and appends the pipeline RunId as a new column. Then output it to a new area in the data lake so that the data can be picked up by the rest of your pipeline. To do this, you just need to pass a parameter from ADF to U-SQL. Please refer to the U-SQL Activity documentation.
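The second workaround above can be sketched as a Data Lake Analytics U-SQL activity that forwards the pipeline RunId to the script as a parameter. This is a hedged example, assuming the ADF v2 activity schema; the script path, linked service names, and parameter name are illustrative:
```json
{
    "name": "AppendRunIdColumn",
    "type": "DataLakeAnalyticsU-SQL",
    "linkedServiceName": {
        "referenceName": "AzureDataLakeAnalyticsLS",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scriptPath": "scripts/AppendRunId.usql",
        "scriptLinkedService": {
            "referenceName": "AzureDataLakeStoreLS",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "runId": "@pipeline().RunId"
        }
    }
}
```
Inside the U-SQL script, the passed parameter is available as a declared variable, which can then be selected as the new column when writing the output file.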
In this thread: use adf pipeline parameters as source to sink columns while mapping, the customer used the second approach.