Dataflow error - Data source cannot be converted
Problem description
We have a data flow that has run fine for the past 30 days but failed last night. The error message in the pipeline is:
{
"StatusCode": "DFExecutorUserError",
"Message": "Job failed due to reason: The given value of type NVARCHAR(255) from the data source cannot be converted to type nvarchar(100) of the specified target column.",
"Details": "shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: The given value of type NVARCHAR(255) from the data source cannot be converted to type nvarchar(100) of the specified target column.
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.validateStringBinaryLengths(SQLServerBulkCopy.java:1690)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeColumn(SQLServerBulkCopy.java:3006)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeBatchData(SQLServerBulkCopy.java:3647)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1566)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$300(SQLServerBulkCopy.java:65)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:663)
at shaded.msdataflow.com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7418)
at shaded.msdataflow.com.microsoft.sqlserver.jd"
}
The error message is not very precise: it does not name the offending column, which makes troubleshooting harder.
Both the source and destination tables are hosted in the same Azure SQL database. The destination table contains two columns of length 100. The corresponding source fields are varchar(280) and varchar(290), and the actual maximum length of the data in those columns is 30 and 40 characters.
--Source table
CREATE TABLE [pq].[USRS](
[USRS_ID] [varchar](100) NULL,
[USRS_FULL_NAME] [varchar](280) NULL,
/* STRIPPED NON-RELEVANT COLUMNS */
[USRS_EMAIL_ADDRESS] [varchar](290) NULL
/* STRIPPED NON-RELEVANT COLUMNS */
) ON [PRIMARY]
GO
--Destination table
CREATE TABLE [fin].[l20_dim_gebruiker](
[id] [int] IDENTITY(1,1) NOT NULL,
[gebruiker_id] [int] NOT NULL,
[volledige_naam] [varchar](100) NULL,
[emailadres] [varchar](100) NULL,
/* STRIPPED NON-RELEVANT COLUMNS */
CONSTRAINT [PK__l20_dim___3213E83F139E41DB] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
What could be causing this error message, and how can we make the data flow conversion more robust?
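As a first diagnostic step, one can look for source rows that would overflow the 100-character target columns. This is a hedged sketch against the tables above; note that it checks both LEN and DATALENGTH, since the bulk copy validates the raw value length:

```sql
-- Diagnostic sketch: find source rows too wide for the varchar(100) targets.
-- DATALENGTH counts trailing spaces (bytes for varchar); LEN does not.
SELECT USRS_ID,
       LEN(USRS_FULL_NAME)            AS full_name_len,
       DATALENGTH(USRS_FULL_NAME)     AS full_name_bytes,
       DATALENGTH(USRS_EMAIL_ADDRESS) AS email_bytes
FROM pq.USRS
WHERE DATALENGTH(USRS_FULL_NAME) > 100
   OR DATALENGTH(USRS_EMAIL_ADDRESS) > 100;
```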
Answer
OK, so the error was on my side. The check I had run against the source table was not conclusive.
Select max(len(USRS_FULL_NAME)) As USRS_FULL_NAME_LEN
     , max(datalength(USRS_FULL_NAME)) As USRS_FULL_NAME_DATALENGTH
From pq.USRS
The result of the query above:
+--------------------+---------------------------+
| USRS_FULL_NAME_LEN | USRS_FULL_NAME_DATALENGTH |
+--------------------+---------------------------+
| 30 | 250 |
+--------------------+---------------------------+
The T-SQL LEN function does not count trailing spaces; see the MSFT Docs or this workaround on SO. DATALENGTH, by contrast, counts trailing spaces, which is why values that LEN reported as only 30 characters were in fact 250 bytes wide and were rejected by the bulk copy into the varchar(100) columns.
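The difference is easy to demonstrate with a padded value (a minimal sketch; the sample string is hypothetical, not taken from the real table):

```sql
-- LEN ignores trailing spaces; DATALENGTH (bytes, for varchar) does not.
DECLARE @s varchar(300) = 'John Doe' + REPLICATE(' ', 242);
SELECT LEN(@s)        AS len_result,        -- 8
       DATALENGTH(@s) AS datalength_result; -- 250
```

So a column can pass a MAX(LEN(...)) check and still overflow the target during a bulk copy.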
My fix for now is to trim the nvarchar fields using a Derived Column transformation.
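A sketch of what those Derived Column expressions could look like in the mapping data flow expression language (assuming the built-in rtrim and left functions; column names are from the tables above):

```
USRS_FULL_NAME     : left(rtrim(USRS_FULL_NAME), 100)
USRS_EMAIL_ADDRESS : left(rtrim(USRS_EMAIL_ADDRESS), 100)
```

rtrim removes the trailing spaces that caused the overflow, and the left(..., 100) guard caps the value at the target column width as a safety net.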