Different floating point precision from RDD and DataFrame

Problem Description

I converted an RDD to a DataFrame and compared the result with another DataFrame that I imported using read.csv, but the floating-point precision is not the same between the two approaches. I appreciate your help.

The data I am using is from here.

from pyspark.sql import Row
from pyspark.sql.types import *

RDD way

orders = sc.textFile("retail_db/orders")
order_items = sc.textFile("retail_db/order_items")

# Keep only CLOSED/COMPLETE orders as (order_id, date) pairs.
orders_comp = orders.filter(lambda line: ((line.split(',')[-1] == 'CLOSED') or
                                          (line.split(',')[-1] == 'COMPLETE')))
orders_compMap = orders_comp.map(lambda line: (int(line.split(',')[0]), line.split(',')[1]))

# (order_id, (product_id, sub_total)); sub_total is parsed as a 64-bit Python float.
order_itemsMap = order_items.map(lambda line: (int(line.split(',')[1]),
                                               (int(line.split(',')[2]), float(line.split(',')[4]))))

joined = orders_compMap.join(order_itemsMap)
joined2 = joined.map(lambda line: ((line[1][0], line[1][1][0]), line[1][1][1]))

# Sum sub_total per (date, product_id).
joined3 = joined2.reduceByKey(lambda a, b: a + b).sortByKey()

df1 = joined3.map(lambda x: Row(date=x[0][0], product_id=x[0][1], total=x[1])) \
             .toDF().select(['date', 'product_id', 'total'])

DataFrame

schema = StructType([StructField('order_id', IntegerType(), True),
                     StructField('date', StringType(), True),
                     StructField('customer_id', StringType(), True),
                     StructField('status', StringType(), True)])

orders2 = spark.read.csv("retail_db/orders", schema=schema)

schema = StructType([StructField('item_id', IntegerType(), True),
                     StructField('order_id', IntegerType(), True),
                     StructField('product_id', IntegerType(), True),
                     StructField('quantity', StringType(), True),
                     StructField('sub_total', FloatType(), True),       # 4-byte single precision
                     StructField('product_price', FloatType(), True)])

orders_items2 = spark.read.csv("retail_db/order_items", schema=schema)

orders2.registerTempTable("orders2t")
orders_items2.registerTempTable("orders_items2t")

df2 = spark.sql('select o.date, oi.product_id, sum(oi.sub_total) as total '
                'from orders2t as o inner join orders_items2t as oi '
                'on o.order_id = oi.order_id '
                'where o.status in ("CLOSED", "COMPLETE") '
                'group by o.date, oi.product_id '
                'order by o.date, oi.product_id')

Are they the same?

df1.registerTempTable("df1t")
df2.registerTempTable("df2t")

 spark.sql("select d1.total - d2.total as difference from df1t as d1 inner 
 join df2t as d2 on d1.date = d2.date \
 and d1.product_id =d2.product_id ").show(truncate = False)

Solution

Ignoring possible loss of precision in the conversions, the two results are not the same.

• Python

  According to Python's Floating Point Arithmetic: Issues and Limitations, the standard implementation uses a 64-bit representation:

  Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 "double precision". 754 doubles contain 53 bits of precision.

• Spark SQL

  Spark SQL's FloatType uses a 32-bit representation:

  FloatType: Represents 4-byte single-precision floating point numbers.
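The gap is visible even for a single value, before any join or sum. Here is a minimal sketch using only the Python standard library; the price 129.99 is an illustrative value, not one taken from the dataset. Round-tripping a 64-bit Python float through a 4-byte representation, which is effectively what a FloatType column stores, already changes it:

import struct

value = 129.99  # a 64-bit Python float (IEEE-754 double)

# Pack/unpack as a 4-byte single-precision float, the same width as Spark's FloatType.
as_float32 = struct.unpack('f', struct.pack('f', value))[0]

print(value)               # 129.99
print(as_float32)          # 129.99000549316406
print(as_float32 - value)  # ~5.5e-06 off for one value; the per-row errors then accumulate in the sums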

Using DoubleType might be closer:

DoubleType: Represents 8-byte double-precision floating point numbers.

But if predictable behavior is important, you should use DecimalType with a well-defined precision.
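As a minimal sketch of that change, reusing the spark session and retail_db paths from the question (the DecimalType(10, 2) precision is an illustrative choice, not something prescribed by the dataset), the monetary columns are declared as DoubleType at read time, and sub_total is additionally cast to a decimal:

from pyspark.sql.types import (StructType, StructField, IntegerType,
                               StringType, DoubleType, DecimalType)

# Same order_items layout as above, but with the monetary columns read as
# 8-byte doubles so they match Python's 64-bit floats.
items_schema = StructType([StructField('item_id', IntegerType(), True),
                           StructField('order_id', IntegerType(), True),
                           StructField('product_id', IntegerType(), True),
                           StructField('quantity', StringType(), True),
                           StructField('sub_total', DoubleType(), True),
                           StructField('product_price', DoubleType(), True)])

order_items_dbl = spark.read.csv("retail_db/order_items", schema=items_schema)

# For fully predictable arithmetic on money, cast the column to a decimal instead.
order_items_dec = order_items_dbl.withColumn(
    'sub_total', order_items_dbl['sub_total'].cast(DecimalType(10, 2)))

With DoubleType the DataFrame totals should line up much more closely with the RDD version; with DecimalType the sums become exact for values that fit the declared precision.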
