Is heap management thread-local?


Problem Description



Hi all,
I know the standard doesn't care about threads (wrongly :-)

But in current compiler implementations, is the "list" which holds count
of the occupied and free heap addresses SHARED among various threads or not?

I don't know if I was clear:

Case1: the list which holds count of what is free and what is occupied
is SHARED among all the threads. In this case each
malloc/new/free/delete needs to acquire a mutex.
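Case 1 can be pictured as a wrapper that serializes every call behind one global lock (a minimal sketch, not how any real libc does it; `locked_malloc`/`locked_free` are made-up names, and `std::malloc` stands in for walking the shared free list):

```cpp
#include <cstdlib>
#include <mutex>

// Hypothetical Case-1 allocator: one global mutex guards the shared free
// list, so every thread contends on the same lock for every call.
static std::mutex g_heap_mutex;

void* locked_malloc(std::size_t n) {
    std::lock_guard<std::mutex> lock(g_heap_mutex);
    return std::malloc(n);  // stands in for "walk the shared free list"
}

void locked_free(void* p) {
    std::lock_guard<std::mutex> lock(g_heap_mutex);
    std::free(p);
}
```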

Case2: list (almost) thread local: no need for mutexes inside the
malloc/new/free/delete implementations, for most calls. So concurrent
access to malloc/new is fast

In this latter case of course there should be some heap address ranges
which are reserved for use by thread1, some are reserved for thread2...
so they don't usually conflict. If some thread fills up its address
range with mallocs, then it has to take a mutex and rearrange the
address ranges dedicated to the various threads so that it can get some
more heap range for the next mallocs...
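The Case 2 scheme described here can be sketched with a `thread_local` bump arena whose slow path takes a lock only when the thread's range is exhausted (a toy illustration under the poster's assumptions; it leaks its chunks and never reuses freed memory, and real thread-caching allocators are far more elaborate):

```cpp
#include <cstddef>
#include <cstdlib>
#include <mutex>

// Toy Case-2 allocator: each thread bump-allocates from its own arena and
// takes the global lock only on the rare slow path, when its range runs out.
static std::mutex g_range_mutex;

struct Arena {
    char*       cur  = nullptr;
    std::size_t left = 0;
};
static thread_local Arena t_arena;

void* fast_alloc(std::size_t n) {
    if (t_arena.left < n) {
        // Slow path: grab a fresh range for this thread under the lock.
        std::lock_guard<std::mutex> lock(g_range_mutex);
        const std::size_t chunk = (n > 4096) ? n : 4096;
        t_arena.cur = static_cast<char*>(std::malloc(chunk));
        if (t_arena.cur == nullptr) return nullptr;
        t_arena.left = chunk;
    }
    void* p = t_arena.cur;  // fast path: no synchronization at all
    t_arena.cur  += n;
    t_arena.left -= n;
    return p;
}
```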
I need to know the answer to evaluate if for fast
allocations/deallocations it should be wise to use something like an
allocator pool.


Oh another question: are distinct allocator functions for the various
classes automatically generated? It would seem wise to me to divide the
heap (or the section of the heap dedicated to one thread) in address
ranges, and each address range should be used for one class only. In
this way the allocation for various objects of the same type would be
contiguous and the memory would never be fragmented. Of course if one
class finishes its heap range, a reassignment of the heap ranges with
the other objects would have to be made.
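On the second question: C++ does not generate a separate allocator per class automatically, but the per-class range idea can be opted into by overloading `operator new`/`operator delete` at class scope. A minimal sketch with a hypothetical fixed-size pool (the toy `operator delete` never recycles slots):

```cpp
#include <cstddef>
#include <new>

// Hypothetical per-class pool: Node objects are carved out of one dedicated
// region, so same-type allocations stay contiguous, as the post suggests.
class Node {
public:
    int value = 0;

    static void* operator new(std::size_t n) {
        if (used_ + n > sizeof(pool_)) throw std::bad_alloc();
        void* p = pool_ + used_;
        used_ += n;
        return p;
    }
    static void operator delete(void*) noexcept {
        // A real pool would recycle the slot; this toy one never does.
    }

private:
    alignas(alignof(std::max_align_t)) static char pool_[4096];
    static std::size_t used_;
};
char        Node::pool_[4096];
std::size_t Node::used_ = 0;
```

Successive `new Node` calls land back to back in `pool_`, which is exactly the contiguity property the post is after.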
TIA

Solution

John Doe wrote:

> Hi all,
> I know the standard doesn't care about threads

So you know you're off-topic here.

> (wrongly :-)

Depends on the view.

> But in current compiler implementations, is the "list" which holds
> count of the occupied and free heap addresses SHARED among various
> threads or not?

Depends on the implementation, but I guess it is in most implementations.

> I don't know if I was clear:
>
> Case1: the list which holds count of what is free and what is occupied
> is SHARED among all the threads. In this case each
> malloc/new/free/delete needs to acquire a mutex.

Or use one of the various other synchronization techniques, of which
some are in this case a lot better (more efficient) than mutexes.
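One such alternative is a lock-free free list, where blocks are pushed and popped with an atomic compare-and-swap instead of a mutex. A simplified sketch (it ignores the ABA problem that production versions must handle with tags or hazard pointers):

```cpp
#include <atomic>

// Simplified lock-free free list: freed blocks are threaded through their
// own first bytes instead of being tracked under a mutex.
struct FreeNode { FreeNode* next; };

std::atomic<FreeNode*> g_free_head{nullptr};

void lf_push(FreeNode* node) {
    FreeNode* head = g_free_head.load(std::memory_order_relaxed);
    do {
        node->next = head;  // retried with the fresh head on CAS failure
    } while (!g_free_head.compare_exchange_weak(head, node,
                                                std::memory_order_release,
                                                std::memory_order_relaxed));
}

FreeNode* lf_pop() {
    FreeNode* head = g_free_head.load(std::memory_order_acquire);
    while (head &&
           !g_free_head.compare_exchange_weak(head, head->next,
                                              std::memory_order_acquire,
                                              std::memory_order_acquire)) {
        // head has been reloaded by the failed CAS; try again.
    }
    return head;  // nullptr when the list is empty
}
```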
> Case2: list (almost) thread local: no need for mutexes inside the
> malloc/new/free/delete implementations, for most calls. So concurrent
> access to malloc/new is fast
>
> In this latter case of course there should be some heap address ranges
> which are reserved for use by thread1, some are reserved for
> thread2... so they don't usually conflict. If some thread fills up its
> address range with mallocs, then it has to take a mutex and rearrange
> the address ranges dedicated to the various threads so that it can get
> some more heap range for the next mallocs...

How could it rearrange the address space if there is already something
in it?

> I need to know the answer to evaluate if for fast
> allocations/deallocations it should be wise to use something like an
> allocator pool.

There are quite a lot of malloc implementations out there, and there may
be some that are quite efficient in multithreading environments. Just
google for them.
> Oh another question: are distinct allocator functions for the various
> classes automatically generated? It would seem wise to me to divide
> the heap (or the section of the heap dedicated to one thread) in
> address ranges, and each address range should be used for one class
> only. In this way the allocation for various objects of the same type
> would be contiguous and the memory would never be fragmented. Of
> course if one class finishes its heap range, a reassignment of the
> heap ranges with the other objects would have to be made.

Again, I wonder how you can reassign memory that is already in use.


>> Hi all,
>> I know the standard doesn't care about threads
>
> So you know you're off-topic here.

I thought I had posted to comp.lang.c++!
What's the difference with comp.std.c++ then??


> How could it rearrange the address space if there is already something
> in it?

You reassign the *free* space of course.
If all threads have used up all the address ranges that were assigned to
them, there is no way to malloc another thing!

Suppose you have 3 threads.
You divide all the heap space in 3 address ranges and assign each range
to a thread.

If thread1 uses all of it, and thread 2 and 3 still have mallocated
nothing, then at the next malloc on thread1 it will acquire the mutex
(or whatever it is), reassign globally free space equally among the
three threads (thread1 will own 1/3 + 2/9 of the heap space, not all
contiguous, the other two will own 2/9 each, contiguous), release the
mutex and then malloc.
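The fractions in that example do add up; counting in ninths of the heap (a purely illustrative check):

```cpp
#include <cassert>

// The redistribution arithmetic from the example, counted in ninths.
int thread1_share_in_ninths() {
    const int heap     = 9;               // whole heap = 9/9
    const int t1_used  = 3;               // thread1 already used its 1/3 = 3/9
    const int free_now = heap - t1_used;  // 6/9 is globally free
    const int share    = free_now / 3;    // each thread receives 2/9 of it
    assert(t1_used + 3 * share == heap);  // nothing is lost in the split
    return t1_used + share;               // thread1 ends up with 1/3 + 2/9 = 5/9
}
```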

Of course the delete/free function needs to be a little smarter: if the
thread who deletes is not the same as the thread who has mallocated, it
needs to acquire the mutex and go to the list of the other thread before
releasing the memory, but it would happen only in a minority of cases,
and never when doing malloc/new.
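This cross-thread delete is roughly what thread-caching allocators call a remote free. A toy model, assuming each thread owns a lock-guarded list and the ownership test is passed in explicitly (a real allocator would derive the owner from the block itself):

```cpp
#include <mutex>
#include <vector>

// Toy model of the delete-from-another-thread case: each thread owns a free
// list; a remote free must take the owner's lock, a local one need not.
struct ThreadHeap {
    std::mutex         lock;
    std::vector<void*> free_blocks;
};

void free_block(ThreadHeap& owner, bool freed_by_owner, void* p) {
    if (freed_by_owner) {
        owner.free_blocks.push_back(p);  // fast path: no lock needed
    } else {
        std::lock_guard<std::mutex> g(owner.lock);  // slow path: remote free
        owner.free_blocks.push_back(p);
    }
}
```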


John Doe wrote:

>>> Hi all,
>>> I know the standard doesn't care about threads
>>
>> So you know you're off-topic here.
>
> I thought I had posted to comp.lang.c++!
> What's the difference with comp.std.c++ then??



comp.std.c++ discusses the C++ standard.

comp.lang.c++ discusses the C++ language as defined by the C++ standard.

This is off-topic here. Any meaningful discussion of this topic would
need a context (such as a particular threading implementation) that this
group does not provide.

-Kevin
--
My email address is valid, but changes periodically.
To contact me please use the address from a recent posting.

