pandas to_sql all columns as nvarchar


Question

I have a pandas dataframe that is dynamically created, with column names that vary. I'm trying to push it to SQL, but I don't want the columns to land in MS SQL Server as the default datatype "text" (can anyone explain why this is the default? Wouldn't it make sense to use a more common datatype?)

Does anyone know how I can specify a datatype for all columns?

column_errors.to_sql('load_errors',push_conn, if_exists = 'append', index = False, dtype = #Data type for all columns#)

The dtype argument takes a dict, and since I don't know what the columns will be, it is hard to set them all to 'sqlalchemy.types.NVARCHAR'.

This is what I would like to do:

column_errors.to_sql('load_errors',push_conn, if_exists = 'append', index = False, dtype = 'sqlalchemy.types.NVARCHAR')

Any help/understanding of how best to specify all column types would be much appreciated!

Answer

You can create this dict dynamically if you do not know the column names in advance:

from sqlalchemy.types import NVARCHAR
df.to_sql(...., dtype={col_name: NVARCHAR for col_name in df})

Note that you have to pass the sqlalchemy type object itself (or an instance to specify parameters like NVARCHAR(length=10)) and not a string as in your example.
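
For reference, here is a minimal end-to-end sketch of this approach; the engine URL, table name, example data, and the NVARCHAR(length=255) size are illustrative assumptions rather than values from the question:

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR

# Hypothetical connection string; substitute your own MS SQL Server DSN.
engine = create_engine("mssql+pyodbc://user:password@my_dsn")

# Stand-in for the dynamically created dataframe from the question.
column_errors = pd.DataFrame({"file": ["a.csv"], "error": ["bad value"]})

# Build the dtype mapping dynamically: every column maps to NVARCHAR.
# Passing an instance such as NVARCHAR(length=255) also lets you fix the column size.
dtype_map = {col: NVARCHAR(length=255) for col in column_errors.columns}

column_errors.to_sql("load_errors", engine, if_exists="append",
                     index=False, dtype=dtype_map)

The dictionary comprehension is what makes this work for dynamically created dataframes whose column names are not known in advance.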
