pandas to_sql all columns as nvarchar


Problem description

I have a pandas dataframe that is dynamically created with column names that vary. I'm trying to push them to SQL, but I don't want them to go to MS SQL Server as the default datatype "text" (can anyone explain why this is the default? Wouldn't it make sense to use a more common datatype?)

Does anyone know how I can specify a datatype for all columns?

column_errors.to_sql('load_errors',push_conn, if_exists = 'append', index = False, dtype = #Data type for all columns#)

The dtype argument takes a dict, and since I don't know what the columns will be, it is hard to set them all to 'sqlalchemy.types.NVARCHAR'.

This is what I would like to do:

column_errors.to_sql('load_errors',push_conn, if_exists = 'append', index = False, dtype = 'sqlalchemy.types.NVARCHAR')

Any help/understanding of how best to specify all column types would be much appreciated!

Recommended answer

You can create this dict dynamically if you do not know the column names in advance:

from sqlalchemy.types import NVARCHAR
df.to_sql(...., dtype={col_name: NVARCHAR for col_name in df})

Note that you have to pass the sqlalchemy type object itself (or an instance to specify parameters like NVARCHAR(length=10)) and not a string as in your example.
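
As a minimal sketch applied to the question's own dataframe and connection (column_errors and push_conn), it might look like the following; the length=255 value is an assumption for illustration, not something given in the original question:

from sqlalchemy.types import NVARCHAR

# Map every column in the dataframe to NVARCHAR with an assumed length of 255
all_nvarchar = {col_name: NVARCHAR(length=255) for col_name in column_errors}

column_errors.to_sql('load_errors', push_conn, if_exists='append', index=False, dtype=all_nvarchar)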
