vector push_back over std::copy
I have a function which takes an unordered set as a parameter. Since I am using OpenMP, I am converting this unordered set to a vector. I use std::copy for this conversion.
// pseudo code
func(std::unordered_set s1)
begin
    vector v1;
    std::copy(s1.begin(), s1.end(), std::back_inserter(v1));
    #pragma omp parallel for
    for (i = 0; i < v1.size(); i++)
    {
        // accessing v1[i]
    }
end
However, I feel std::copy is a costly operation. So what I think is: if I create a class member vector and keep populating this vector as and when I update my set, I can completely avoid this std::copy operation, since the time complexity of a vector's push_back operation is amortized O(1). What do you suggest?
std::back_insert_iterator calls std::vector::push_back, so your proposal doesn't improve anything.
What is important is that you know the size v1 will have beforehand, so make use of that information and make std::vector allocate its storage only once, to avoid the reallocations std::vector::push_back does when v1.size() == v1.capacity().
Do this:
std::vector<T> v1;
v1.reserve(s1.size());
std::copy(s1.begin(), s1.end(), std::back_inserter(v1));
or this:
std::vector<T> v1(s1.size());
std::copy(s1.begin(), s1.end(), v1.begin());
or, as suggested by @CoryKramer, idiomatically construct v1
from a range:
std::vector<T> v1(s1.begin(), s1.end());
Update:
All three versions do s1.size() copies of T. However, when measured on GCC with 10^7 elements of T = int, the std::vector::reserve version was the fastest method (twice as fast as range construction, because std::distance over ForwardIterators has linear complexity, whereas std::unordered_set::size has constant complexity). This difference will diminish when dealing with fewer and very large objects, but it will still exist.
The second way was just slightly slower than the first, because of value-initializing the elements.
Conclusion: use std::vector::reserve.