PySpark - How to remove scientific notation in csv output


Question

I have a Spark aggregation whose result I'd like to write to CSV, but I'm finding that Spark always writes small decimal values in scientific notation. I've tried the solution mentioned in this question, but that has not worked either.

Expected output:

foo,avg(bar)
a,0.0000002
b,0.0000001

Actual output:

foo,avg(bar)
a,2.0E-7
b,1.0E-7
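
The `E-7` form is not specific to Spark: doubles below about 1e-4 render in scientific notation by default, both in Python's `repr` and in Java's `Double.toString` (which Spark's CSV writer relies on for double columns). Formatting with an explicit fixed-point specifier keeps the plain-decimal form, as this plain-Python sketch shows:

```python
x = 2e-7
print(repr(x))           # default rendering uses scientific notation: 2e-07
print(format(x, ".7f"))  # fixed-point with 7 decimal places: 0.0000002
```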

See the example below:

from os import path
import shutil
import glob
from pyspark.sql import SQLContext, functions as F, types

def test(sc):
    sq = SQLContext(sc)
    data = [("a", 1e-7), ("b", 1e-7), ("a", 3e-7)]
    df = sq.createDataFrame(data, ['foo', 'bar'])

    # 12 digits with 9 decimal places
    decType = types.DecimalType(precision=12, scale=9)

    # Cast both the column input and column output to Decimal
    aggs = [F.mean(F.col("bar").cast(decType)).cast(decType)]

    groups = [F.col("foo")]
    result = df.groupBy(*groups).agg(*aggs)
    write(result)
    return df, aggs, groups, result

def write(result):
    tmpDir = path.join("res", "tmp")
    config = {"sep": ","}
    result.write.format("csv")\
        .options(**config)\
        .save(tmpDir)

    # Once the distributed portion is done, merge the parts into a single file
    allFiles = glob.glob(path.join(tmpDir, "*.csv"))

    fullOut = path.join("res", "final.csv")
    # Open in text mode so the header string can be written directly
    with open(fullOut, 'w') as wfd:
        # First write out the header row
        header = config.get("sep", ',').join(result.columns)
        wfd.write(header + "\n")
        for f in allFiles:
            with open(f, 'r') as fd:
                shutil.copyfileobj(fd, wfd)
    shutil.rmtree(tmpDir)

In a pyspark shell:

import spark_test as t
t.test(sc)

Answer

>>> from pyspark.sql.functions import format_string
>>> df1 = spark.createDataFrame([('a','2.0e-7'),('b','1e-5'),('c','1.0e-7')],['foo','avg'])
>>> df1.show()
+---+------+
|foo|   avg|
+---+------+
|  a|2.0e-7|
|  b|  1e-5|
|  c|1.0e-7|
+---+------+

>>> df1.select('foo','avg',format_string('%.7f',df1.avg.cast('float')).alias('converted')).show()
+---+------+---------+
|foo|   avg|converted|
+---+------+---------+
|  a|2.0e-7|0.0000002|
|  b|  1e-5|0.0000100|
|  c|1.0e-7|0.0000001|
+---+------+---------+
