Loading my data with numpy genfromtxt gives errors


Problem description

My data file contains 7500 lines that look like this:

Y1C 1.53    -0.06   0.58    0.52    0.42    0.16    0.79        -0.6    -0.3    
-0.78   -0.14   0.38    0.34    0.23    0.26    -1.8    -0.1    -0.17   0.3 
0.6 0.9 0.71    0.5 0.49    1.06    0.25    0.96    -0.39   0.24    0.69    
0.41    0.7 -0.16   -0.39   0.6 1.04    0.4 -0.04   0.36    0.23    -0.14   
-0.09   0.15    -0.46   -0.05   0.32    -0.54   -0.28   -0.15   1.34    0.29    
0.59    -0.43   -0.55   -0.18   -0.01   0.68        -0.06   -0.11   -0.67                   
-0.25   -0.34   -0.38   0.02    -0.21   0.12    0.01    0.07    0.15    0.14                
0.15    -0.11   0.07    -0.41   -0.2    0.24    0.06    0.12    0.12    0.11    
0.1 0.24    -0.71   0.22    -0.02   0.15    0.84    1.39    0.13    0.48    
0.19    -0.23   -0.12   0.33    0.37    0.18    0.06    0.32    0.09    
-0.09   0.02    -0.01   -0.06   -0.23   0.52    0.14    0.24    -0.05   0.37    
0.1 0.45    0.38    1.34    0.74    0.5 0.92    0.91    1.34    1.78    2.26    
0.05    0.29    0.53    0.17    0.41    0.47    0.47    1.21    0.87    0.68    
1.08    0.89    0.13    0.5 0.57    -0.5    -0.78   -0.34   -0.3    0.54    
0.31    0.64    1.23    0.335   0.36    -0.65   0.39    0.39    0.31    0.73    
0.54    0.3 0.26    0.47    0.13    0.24    -0.6    0.02    0.11    0.27    
0.21    -0.3    -1  -0.44   -0.15   -0.51   0.3 0.14    -0.15   -0.27   -0.27

Y2W -0.01   -0.3    0.23    0.01    -0.15   0.45    -0.04   0.14    -1.16   
-0.14   -0.56   -0.13   0.77    0.77    -0.57   0.48    0.22    -0.08   
-1.81   -0.46   -0.17   0.2 -0.18   -0.45   -0.4    1.35    0.81    1.21    
0.52    0.02    -0.06   0.37    0   -0.38   -0.02   0.48    0   0.58    0.81    
0.54    0.18    -0.11   0.03    0.1 -0.38   0.17    0.37    -0.05   0.13    
-0.01   -0.17   0.36    0.22    0   -1.4    -0.67   -0.45   -0.62   -0.58   
-0.47   -0.86   -1.12   -0.43   0.1 0.06    -0.45   -0.14   0.68    -0.16      
0.14    0.14    0.18    0.14    0.17    0.13    0.07    0.05    0.04    0.07    
-0.01   0.03    0.05    0.02    0.12    0.34    -0.04   -0.75   1.68    0.23    
0.49    0.38    -0.57   0.17    -0.04   0.19    0.22    0.29    -0.04   -0.3    
0.18    0.04    0.3 -0.06   -0.07   -0.21   -0.01   0.51    -0.04   -0.04   
-0.23   0.06    0.9 -0.14   0.19    2.5 2.84    3.27    2.13    2.5 2.66    
4.16    3.52    -0.12   0.13    0.44    0.32    0.44    0.46    0.7 0.68    
0.99    0.83    0.74    0.51    0.33    0.22    0.01    0.33    -0.19   0.4 
0.41    0.07    0.18    -0.01   0.45    -0.37   -0.49   1.02    -0.59   
-1.44   -1.53   -0.74   -1.48   0.12    0.05    0.02    -0.1    0.57    
-0.36   0.1 -0.16   -0.23   -0.34   -0.61   -0.37   -0.14   -0.22   -0.27   
-0.08   -0.08   -0.17   0.18    -0.74

Y3W 0.15    -0.07   -0.25   -0.3    -1.12   -0.67   -0.15   -0.43   0.63    
0.92    0.25    0.33    0.81    -0.12   -0.12   0.67    0.86    0.86        
1.54    -0.3    0   -0.29   -0.74   0.15    0.59    0.15    0.34    0.23    
0.5 0.52    0.25    0.86    0.53    0.51    0.25        -1.29   -1.79           
-0.45   -0.64   0.01    -0.58   -0.51   -0.74   -1.32               -0.47       
-0.81   0.55    -0.09   0.46    -0.3    -0.2    -0.81   -1.56   -2.74   1.03    
1   1.01    0.29    -0.64   -1.03   0.07    0.46    0.33    0.04    -0.6    
-0.64   -0.51   -0.36   -0.1    0.13    -1.4    -1.17   -0.64   -0.16   -0.5    
-0.47   0.75    0.62    0.7 1.06    0.93    0.56    -2.25   -0.94   -0.09   
0.08    -0.15   -1.6    -1.43   -0.84   -0.25   -1.22   -0.92   -1.22   
-0.97   -0.84   -0.89   0.24    0   -0.04   -0.64   -0.94   -1.56   -2.32   
0.63    -0.17   -3.06   -2.4    -2  -1.4    -0.81   -1.6    -3.06   -1.79   
0.17    0.28    -0.67   -2.82   -1.47   -1.82   -1.69   -1.38   -1.96   
-1.88   -2.34   -3.06   -0.18   0.5 -0.03   -0.49   -0.61   -0.54   -0.37   
0.1 -0.92   -1.79   -0.03   -0.54   0.94    -1  0.15    0.95    0.55    
-0.36   0.4 -0.73   0.85    -0.26   0.55    0.14    -0.36   0.38    0.87    
0.62    0.66    0.79    -0.67   0.48    0.62    0.48    0.72    0.73    0.29    
-0.3    -0.81

Y4W 0.24    0.76    0.2 0.34    0.11    0.07    0.01    0.36    0.4 -0.25   
-0.45   0.09    -0.97   0.19    0.28    -1.81   -0.64   -0.49   -1.27   
-0.95   -0.1    0.12    -0.1    0   -0.08   0.77    1.02    0.92    0.56    
0.1 0.7 0.57    0.16    1.29    0.82    0.55    0.5 1.83    1.79    0.01    
0.24    -0.67   -0.85   -0.42   -0.37   0.2 0.07    -0.01   -0.17   -0.2    
-0.43   -0.34   0.12    -0.21   -0.23   -0.22   -0.1    -0.07   -0.61   
-0.14   -0.43   -0.97   0.27    0.7 0.54    0.11    -0.5    -0.39   0.01    
0.61    0.88    1   0.35    0.67    0.6 0.78    0.46    0.09    -0.06   
-0.35   0.08    -0.14   -0.32   -0.11   0   0.01    0.02    0.77    0.18    
0.36    -1.15   -0.42   -0.19   0.06    -0.25   -0.81   -0.63   -1.43   
-0.85   -0.88   -0.68   -0.59   -1.01   -0.68   -0.71   0.15    0.08    0.08    
-0.03   -0.2    0.03    -0.18   -0.01   -0.08   -1.61   -0.67   -0.74   
-0.54   -0.8    -1.02   -0.84   -1.91   -0.22   -0.02   0.05    -0.84   
-0.65   -0.82   -0.4    -0.83   -0.9    -1.04   -1.23   -0.91   0.28    0.68    
0.57    -0.02   0.4 -1.52   0.17    0.44    -1.18   0.04    0.17    0.16    
0.04    -0.26   0.04    0.1 -0.11   -0.64   -0.09   -0.16   0.16    -0.05   
0.39    0.39    -0.06   0.46    0.2 -0.45   -0.35   -1.3    -0.26   -0.29   
0.02    0.16    0.18    -0.35   -0.45   -1.04   -0.69

Y5C 2.85    3.34                            -1  -0.47   -0.66   -0.03   1.41    
0.8 0   0.41    -0.14   -0.86   -0.79   -1.69       0           0   1.52    
1.29    0.84    0.58    1.02    1.35    0.45    1.02    1.47    0.82    0.46    
0.25    0.77    0.93            -0.58   -0.67   -0.18   -0.56   -0.01   0.25    
-0.71   -0.49           -0.43   0   -1.06   0.44    -0.29   0.26    -0.04   
-0.14   -0.1    -0.12   -1.6    0.33    0.62    0.52    0.7 -0.22   0.44    
-0.6    0.86    1.19    1.58    0.93    1   0.85    1.24    1.06    0.49    
0.26    0.18    0.3 -0.09   -0.42   0.05    0.54    0.24    0.37    0.86    
0.9 0.49    -1.47   -0.2    -0.43   0.2 0.1 -0.81   -0.74   -1.36   -0.97   
-0.94   -0.86   -1.56   -1.89   -1.89   -1.06   0.12    0.06    0.04    
-0.01   -0.12   0.01    -0.15   0.76    0.89    0.71    -1.12   0.03    
-0.86   0.26    0   -0.25   -0.06   0.19    0.41    0.58    -0.46   0.01    
-0.15   0.04    -1.01   -0.57   -0.71   -0.3    -1.01       1.83    0.59        
1.04    -1.43   0.38    0.65    -6.64   -0.42   0.24    0.46    0.96    0.24    
0.7 1.21    0.6 0.12    0.77    -0.03   0.53    0.31    0.46    0.51    
-0.45   0.23    0.32    -0.34   -0.1    0.1 -0.45   0.74    -0.06   0.21    
0.29    0.45    0.68    0.29    0.45

Y7C -0.22   -0.12   -0.29   -0.51   -0.81   -0.47   0.28    -0.1    0.15    
0.38    0.18    -0.27   0.12    -0.15   0.43    0.25    0.19    0.33    0.67    
0.86    -0.56   -0.29   -0.36   -0.42   0.08    0.04    -0.04   0.15    0.38    
-0.07   -0.1    -0.2    -0.03   -0.29   0.06    0.65    0.58    0.86    2.05    
0.3 0.33    -0.29   -0.23   -0.15   -0.32   0.08    0.34    0.15    0   
-0.01   0.28    0.36    0.25    0.46    0.4     0.7 0.49    0.97    1.04    
0.36    -0.47   -0.29   0.77    0.57    0.45    0.77    0.24    -0.23   0.12    
0.49    0.62    0.49    0.84    0.89    1.08    0.87    -0.18   -0.43   
-0.39   -0.18   -0.02   0.01    0.2 -0.2    -0.03   0.01    0.25    0.1 
-0.07   -1.43   -0.2    -0.4    0.32    0.72    -0.42   -0.3    -0.38   
-0.22   -0.81   -1.15   -1.6    -1.89   -2.06   -2.4    0.08    0.34    0.1 
-0.15   -0.06   -0.17   -0.47   -0.4    0.15    -1.22   -1.43   -1.03   
-1.03   -1.64   -1.84   -2.64   -2  0.05    0.4 0.88    -1.54   -1.21   
-1.46   -1.92   -1.52   -1.92   -1.7    -1.94   -1.86   -0.1    -0.02   
-0.22   -0.34   -0.48   0.28    0   0.14    0.4 -0.29   -0.27   -0.3    
-0.67   -0.09   0.23    0.33    0.23    0.1 0.38    -0.51   0.23    -0.73   
0.22    -0.47   0.24    0.68    0.53    0.23    -0.1    0.11    -0.18   0.16    
0.68    0.55    0.28    -0.03   0.03    0.08    0.12

There is a missing value. I wanted to load the file as a matrix, so I used:

data = np.genfromtxt("This_data.txt", delimiter='\t', missing_values=np.nan)

When I print data I get:

Traceback (most recent call last):
  File "matrix.py", line 8, in <module>
    data = np.genfromtxt("This_data.txt", delimiter='\t', missing_values=np.nan ,usecols=np.arange(0,174))
  File "/home/anaconda2/lib/python2.7/numpy/lib/npyio.py", line 1769, in genfromtxt
    raise ValueError(errmsg)
ValueError: Some errors were detected !
    Line #25 (got 172 columns instead of 174)

I had also tried:

data = np.genfromtxt("This_data.txt", delimiter='\t', missing_values=np.nan ,usecols=np.arange(0,174))

But I get the same error. Any suggestions?

Recommended answer

A short sample bytestring substitute for a file:

In [168]: txt = b"""Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81 
     ...: Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81 
     ...: Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81  
     ...: """

A minimal load with the correct delimiter. Note that the first column is nan, because the strings can't be converted to float:

In [169]: np.genfromtxt(txt.splitlines(),delimiter='\t')
Out[169]: 
array([[  nan, -0.22, -0.12, -0.29, -0.51, -0.81],
       [  nan, -0.22, -0.12, -0.29, -0.51, -0.81],
       [  nan, -0.22, -0.12, -0.29, -0.51, -0.81]])

With dtype=None it sets each column's dtype automatically, creating a structured array:

In [170]: np.genfromtxt(txt.splitlines(),delimiter='\t',dtype=None)
Out[170]: 
array([(b'Y7C', -0.22, -0.12, -0.29, -0.51, -0.81),
       (b'Y7C', -0.22, -0.12, -0.29, -0.51, -0.81),
       (b'Y7C', -0.22, -0.12, -0.29, -0.51, -0.81)], 
      dtype=[('f0', 'S3'), ('f1', '<f8'), ('f2', '<f8'), ('f3', '<f8'), ('f4', '<f8'), ('f5', '<f8')])
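
If the numeric part is wanted as a plain 2-D float array, the float fields of that structured array can be stacked; a minimal sketch, assuming the sample txt defined above and binding the structured result to a name first:

import numpy as np

# assuming `txt` is the sample bytestring defined above
arr = np.genfromtxt(txt.splitlines(), delimiter='\t', dtype=None)

# skip the string field 'f0' and stack the remaining float fields column-wise
float_fields = arr.dtype.names[1:]
values = np.vstack([arr[name] for name in float_fields]).T
# values.shape -> (3, 5)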

Spell out the columns to use, skipping the first:

In [172]: np.genfromtxt(txt.splitlines(),delimiter='\t',usecols=np.arange(1,6))
Out[172]: 
array([[-0.22, -0.12, -0.29, -0.51, -0.81],
       [-0.22, -0.12, -0.29, -0.51, -0.81],
       [-0.22, -0.12, -0.29, -0.51, -0.81]])

But if I ask for more columns than it finds, I get an error like yours:

In [173]: np.genfromtxt(txt.splitlines(),delimiter='\t',usecols=np.arange(1,7))
---------------------------------------------------------------------------
.... 
ValueError: Some errors were detected !
    Line #1 (got 6 columns instead of 6)
    Line #2 (got 6 columns instead of 6)
    Line #3 (got 6 columns instead of 6)

Your missing_values parameter doesn't help; that's the wrong use for it.

This is the correct use of missing_values: detect a string value and replace it with a valid float:

In [177]: np.genfromtxt(txt.splitlines(),delimiter='\t',missing_values='Y7C',filling_values=0)
Out[177]: 
array([[ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81],
       [ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81],
       [ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81]])

If the file has enough delimiters, genfromtxt can treat the empty fields as missing values:

In [178]: txt = b"""Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81\t\t 
     ...: Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81\t\t 
     ...: Y7C\t-0.22\t-0.12\t-0.29\t-0.51\t-0.81\t\t  
     ...: """
In [179]: np.genfromtxt(txt.splitlines(),delimiter='\t')
Out[179]: 
array([[  nan, -0.22, -0.12, -0.29, -0.51, -0.81,   nan,   nan],
       [  nan, -0.22, -0.12, -0.29, -0.51, -0.81,   nan,   nan],
       [  nan, -0.22, -0.12, -0.29, -0.51, -0.81,   nan,   nan]])
In [180]: np.genfromtxt(txt.splitlines(),delimiter='\t',filling_values=0)
Out[180]: 
array([[ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81,  0.  ,  0.  ],
       [ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81,  0.  ,  0.  ],
       [ 0.  , -0.22, -0.12, -0.29, -0.51, -0.81,  0.  ,  0.  ]])
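
Applied to the original file, this might look like the following sketch. It assumes the file really is tab-delimited with one label column plus 173 values per complete row; it is a starting point, not a tested call, and rows that are short on actual tab characters will still trigger the "got N columns" error:

import numpy as np

# sketch only: assumes "This_data.txt" has 174 tab-separated fields per full row
# (1 label + 173 numbers); short rows still need real delimiters to be filled
data = np.genfromtxt("This_data.txt", delimiter='\t',
                     usecols=np.arange(1, 174),   # skip the Y1C/Y2W/... label column
                     filling_values=np.nan)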

I believe the pandas csv reader can handle 'ragged' columns and missing values better.
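
A hedged sketch of that pandas route, not tested against the real file: passing an explicit list of column names makes read_csv pad rows that come up short with NaN instead of raising, assuming the full width really is 174 fields:

import pandas as pd

# sketch only: 174 assumed columns (1 label + 173 values); short rows are padded with NaN
df = pd.read_csv("This_data.txt", sep='\t', header=None, names=list(range(174)))
values = df.iloc[:, 1:].values.astype(float)   # drop the label column, keep the numbers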
