Is sparse tensor multiplication implemented in TensorFlow?


Question

Multiplication of sparse tensors with themselves or with dense tensors does not seem to work in TensorFlow. The following example

from __future__ import print_function
import tensorflow as tf

x = tf.constant([[1.0,2.0],
                 [3.0,4.0]])
y = tf.SparseTensor(indices=[[0,0],[1,1]], values=[1.0,1.0], shape=[2,2])
z = tf.matmul(x,y)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run([x, y, z]))

fails with the error message

TypeError: Input 'b' of 'MatMul' Op has type string that does not match type 
float32 of argument 'a'

Both tensors have values of type float32, as can be seen by evaluating them without the multiplication op. Multiplication of y with itself returns a similar error message. Multiplication of x with itself works fine.

Answer

General-purpose multiplication for tf.SparseTensor is not currently implemented in TensorFlow. However, there are three partial solutions, and the right one to choose will depend on the characteristics of your data:

If you have a tf.SparseTensor and a tf.Tensor, you can use tf.sparse_tensor_dense_matmul() to multiply them. This is more efficient than the next approach if one of the tensors is too large to fit in memory when densified: the documentation has more guidance about how to decide between these two methods. Note that it accepts a tf.SparseTensor as the first argument, so to solve your exact problem you will need to use the adjoint_a and adjoint_b arguments and transpose the result.

If you have two sparse tensors and need to multiply them, the simplest (if not the most performant) way is to convert them to dense and use tf.matmul:

a = tf.SparseTensor(...)
b = tf.SparseTensor(...)

# Densify with 0.0 as the default value for unspecified entries.
c = tf.matmul(tf.sparse_tensor_to_dense(a, 0.0),
              tf.sparse_tensor_to_dense(b, 0.0),
              a_is_sparse=True, b_is_sparse=True)

Note that the optional a_is_sparse and b_is_sparse arguments mean "a (or b) has a dense representation, but a large number of its entries are zero", which triggers the use of a different multiplication algorithm.

For the special case of multiplying a sparse vector by a (potentially large and sharded) dense matrix, where the values in the vector are 0 or 1, the tf.nn.embedding_lookup operator may be more appropriate. This tutorial discusses when you might use embeddings and how to invoke the operator in more detail.

For the special case of multiplying a sparse matrix by a (potentially large and sharded) dense matrix, tf.nn.embedding_lookup_sparse() may be appropriate. This function accepts one or two tf.SparseTensor objects, with sp_ids representing the non-zero values and the optional sp_weights representing their values (which otherwise default to one).
