Equivalent for np.add.at in tensorflow
Problem description
How do I convert a np.add.at statement into tensorflow?
np.add.at(dW, self.x.ravel(), dout.reshape(-1, self.D))
Edit
self.dW.shape is (V, D), self.dout.shape is (N, D) and self.x.size is N.
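To make the shapes concrete, here is a small NumPy run with hypothetical sizes (V=4, D=3, N=5) and repeated entries in x, which is exactly the case np.add.at exists for: unlike plain fancy-index assignment, it accumulates all contributions for a duplicated index.

```python
import numpy as np

V, D, N = 4, 3, 5                       # hypothetical sizes for illustration
dW = np.zeros((V, D))
x = np.array([0, 2, 0, 3, 2])           # N row indices, with repeats
dout = np.arange(N * D, dtype=float).reshape(N, D)

# Unbuffered scatter-add: rows of dout that share an index accumulate.
np.add.at(dW, x.ravel(), dout.reshape(-1, D))

# Plain fancy-index += does NOT accumulate repeats; for a duplicated
# index only one of the contributing rows survives.
dW_naive = np.zeros((V, D))
dW_naive[x] += dout
```

Here `dW[0]` ends up as `dout[0] + dout[2]`, while `dW_naive[0]` only holds `dout[2]` — which is why a simple indexed addition is not a valid translation.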
Accepted answer
For np.add.at, you probably want to look at tf.SparseTensor, which represents a tensor by a list of values and a list of indices (which is more suitable for sparse data, hence the name).
Taking your example:
np.add.at(dW, self.x.ravel(), dout.reshape(-1, self.D))
that would be (assuming dW, x and dout are tensors):
tf.sparse_add(dW, tf.SparseTensor(x, tf.reshape(dout, [-1]), dW.shape))
This is assuming x is of shape [n, nDims] (i.e. x is a 'list' of n indices, each of dimension nDims), and dout has shape [n]. Note that tf.SparseTensor also requires a dense_shape argument (here taken from dW), and that in TensorFlow 2 tf.sparse_add lives at tf.sparse.add.