Attach metadata to vector column in Spark


Question

Context: I have a data frame with two columns: label and features.

org.apache.spark.sql.DataFrame = [label: int, features: vector]

Where features is a mllib.linalg.VectorUDT of numeric type built using VectorAssembler.

Question: Is there a way to assign a schema to the features vector? I want to keep track of the name of each feature.

What I have tried so far:

import org.apache.spark.ml.attribute._

val defaultAttr = NumericAttribute.defaultAttr
val attrs = Array("feat1", "feat2", "feat3").map(defaultAttr.withName)
val attrGroup = new AttributeGroup("userFeatures", attrs.asInstanceOf[Array[Attribute]])


scala> attrGroup.toMetadata 
res197: org.apache.spark.sql.types.Metadata = {"ml_attr":{"attrs":{"numeric":[{"idx":0,"name":"feat1"},{"idx":1,"name":"feat2"},{"idx":2,"name":"feat3"}]},"num_attrs":3}}

But I was not sure how to apply this to an existing data frame.

Answer

There are at least two options:

  1. On an existing DataFrame, you can use the as method with a metadata argument:

import org.apache.spark.ml.attribute._
// assumes Spark 2.x; on Spark 1.x use org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._  // for toDF and $"..." (already in scope in spark-shell)

val rdd = sc.parallelize(Seq(
  (1, Vectors.dense(1.0, 2.0, 3.0))
))
val df = rdd.toDF("label", "features")

df.withColumn("features", $"features".as("_", attrGroup.toMetadata))
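
As a quick sanity check (an aside, not part of the original answer), the attached metadata can be read back from the resulting schema:

val dfWithAttrs = df.withColumn("features", $"features".as("_", attrGroup.toMetadata))
dfWithAttrs.schema("features").metadata
// expected to print the {"ml_attr": ...} metadata shown above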

  2. When you create a new DataFrame, convert the AttributeGroup to a StructField with toStructField and use it as the schema for the given column:

    import org.apache.spark.sql.types.{StructType, StructField, IntegerType}
    import org.apache.spark.sql.Row

    val schema = StructType(Array(
      StructField("label", IntegerType, false),
      attrGroup.toStructField()
    ))

    spark.createDataFrame(
      rdd.map(row => Row.fromSeq(row.productIterator.toSeq)),
      schema)
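
Going the other way (an aside, not from the original answer), AttributeGroup.fromStructField should recover the attributes from a column's metadata:

    val restored = AttributeGroup.fromStructField(schema("userFeatures"))
    restored.attributes.map(_.map(_.name))
    // expected: Some(Array(Some(feat1), Some(feat2), Some(feat3)))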
    

If the vector column has been created using VectorAssembler, column metadata describing the parent columns should already be attached:

    import org.apache.spark.ml.feature.VectorAssembler
    
    val raw = sc.parallelize(Seq(
      (1, 1.0, 2.0, 3.0)
    )).toDF("id", "feat1", "feat2", "feat3")
    
    val assembler = new VectorAssembler()
      .setInputCols(Array("feat1", "feat2", "feat3"))
      .setOutputCol("features")
    
    val dfWithMeta = assembler.transform(raw).select($"id", $"features")
    dfWithMeta.schema.fields(1).metadata
    
    // org.apache.spark.sql.types.Metadata = {"ml_attr":{"attrs":{"numeric":[
    //   {"idx":0,"name":"feat1"},{"idx":1,"name":"feat2"},
    //   {"idx":2,"name":"feat3"}]},"num_attrs":3}}
    

Vector fields are not directly accessible using dot syntax (like $"features.feat1"), but they can be used by specialized tools like VectorSlicer:

    import org.apache.spark.ml.feature.VectorSlicer
    
    val slicer = new VectorSlicer()
      .setInputCol("features")
      .setOutputCol("featuresSubset")
      .setNames(Array("feat1", "feat3"))
    
    slicer.transform(dfWithMeta).show
    // +---+-------------+--------------+
    // | id|     features|featuresSubset|
    // +---+-------------+--------------+
    // |  1|[1.0,2.0,3.0]|     [1.0,3.0]|
    // +---+-------------+--------------+
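
As an aside (not in the original answer), VectorSlicer can also select by position with setIndices when names are not available:

    val slicerByIndex = new VectorSlicer()
      .setInputCol("features")
      .setOutputCol("featuresSubset")
      .setIndices(Array(0, 2))  // positions of feat1 and feat3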
    

For PySpark, see How can I declare a Column as a categorical feature in a DataFrame for use in ml.
