How to keep threads working outside the function which created them


Problem description

Hello,


I have an event handler in the main thread which receives events from an outside process. Upon receipt of a new event, a new thread is created to handle it. The problem is that whenever I create a new thread inside the event handler, it is destroyed when the event handler finishes. To solve this, after creating the thread I call its join function, but this blocks the main thread (which runs the event handler) until the new thread finishes its job. In effect this degenerates back to the single-threaded case, where for each event a new thread is created only after the previous thread is destroyed.

For more explanation, please check the code below:

void ho_commit_indication_handler(message &msg, const boost::system::error_code &ec)
{
.....
}

void event_handler(message &msg, const boost::system::error_code &ec)
{
    if (ec)
    {
        log_(0, __FUNCTION__, " error: ", ec.message());
        return;
    }

    switch (msg.mid())
    {
        case n2n_ho_commit:
        {
            boost::thread thrd(&ho_commit_indication_handler, boost::ref(msg), boost::ref(ec));
            thrd.join();
        }
        break;
    }
}



So my question is, how to handle each event through a separate thread and keep the thread alive even if the main thread exits the event_handler?

Note: I am using Boost 1.49 library


If what you're saying is correct, then why does the following happen in this example:

#include <iostream>       // std::cout
#include <thread>         // std::thread
#include <chrono>

void foo()
{
    std::chrono::milliseconds dura( 2000 );
    std::this_thread::sleep_for( dura );
    std::cout << "Waited for 2Sec\n";
}
 
void bar(int x)
{
    std::chrono::milliseconds dura( 4000 );
    std::this_thread::sleep_for( dura );
    std::cout << "Waited for 4Sec\n";
}
 
int main()
{
  std::thread first (foo);
  std::thread second (bar,0);
 
  return 0;
}






You're saying the thread is alive. I get the following output:
terminate called without an active exception
Aborted (core dumped)

This happens because the main thread terminates before the other threads do.

Thanks a lot.

Recommended Answer

Here is my opinion: threading is a brutally difficult topic and unfortunately I have to agree with Sergey that you don't get it. But don't be disappointed: it takes years even for the best programmers to grasp it, and some (most) programmers never do (depending on their specialty and the language/paradigm of their choice, they may not need to know about threading at all...). As if threading weren't difficult enough, you will find a lot of material/tutorials related to threading, and 99% of them are terrible, ugly, and full of antipatterns.

Some hints:
- A well written multithreaded program always joins the created threads, and usually a thread is created/joined by the same "owner" thread (that is usually the main thread). It is totally pointless/needless to create a thread on one thread and then join it on another. It's better to leave the management of the thread lifecycle to a "manager" thread that is usually the main thread, or in other cases you can hide thread lifecycle management inside a thread pool object.
- Most programs need only a few threads, and these threads can be created at program startup and joined/destroyed at program exit.
- Every multithreaded program can be written this way; if you need detached threads then your threading design is not good. Post some multithreaded problems and people will help in putting together a good design/architecture to solve your problem. If someone uses thread manipulation functions other than "create" and "join" then their design is also bad.

BTW, if you are using a new compiler then you can use threading classes from the std lib instead of boost. Unfortunately the std lib also contains totally unnecessary threading functions/operations that confuse people and encourage antipatterns.

About your actual problem:
You are creating one thread per client and you simply don't know when to join the threads... Actually you should neither create nor join those threads... The following statement will be a very big game changer here:

You shouldn't have more **active**/performance-critical threads than the number of cores in your CPU. Even if your client threads are not CPU-grinding ones, you MUST somehow limit the number of clients served in parallel to protect your server from blowing up from lack of resources (for example, running out of socket handles, or memory to create new threads even if they are idle). How to do this???

Just create a thread pool with a fixed number of threads at program startup and stop/destroy it at program exit. The number of threads can usually be tweaked at threadpool creation. A threadpool consists of several threads and a common job queue. Every thread does the following in a loop: it pulls a job from the queue and executes it; if the queue is empty, it waits until a job arrives. (Offtopic: I usually ask the thread pool threads to terminate by putting NUM_THREADS number of NULL jobs into the queue...)

OK, a thread pool solves how/when to create/join the threads, but how can we use it for your problem? We define our job as a "ServeClient" job. You create this serve-client job on your accept (maybe the main) thread and pass all the client's parameters (e.g. socket, remote address, ...) to the constructor of this job. Then all you do on the main thread is put this "ServeClient" job into the queue of the thread pool. At some later point a pool thread will grab this "ServeClient" job from the queue and call its "Execute" method. In this Execute method you can perform everything that has to be done for this client. When the client has been served, as a last step the job's Execute method can destroy the job object, and the thread can return from Execute to grab the next ServeClient job from the queue.

- OK, but what if servicing a client lasts a long time while other ServeClient jobs wait in the queue and maybe time out? Then you can create more threads.
- What if I have even more clients than the number of threads??? You have to draw the line somewhere... You cannot accept/serve more than MAX_CLIENTS clients if you don't want your server to blow up from too many connections/threads/resources. By the way: the multithreaded one-thread-per-client approach doesn't really work effectively beyond a few hundred clients. With thousands of clients you have to perform the network IO asynchronously (IOCP/epoll/kqueue) on one or a few threads (async IO doesn't need many threads; the OS does the necessary threading for IO in a way optimized for the current platform), and when a request has fully arrived via async IO you perform only the actual server-side logic on a thread pool, in order to avoid blocking the async IO threads... There are countless setups for different scenarios and it is impossible to list all of them. Start writing servers, and after a few dozen servers you will start to grasp what I'm talking about.

For now, limit the max number of clients, for example to 100. If you want to be able to serve 100 clients in parallel, create a threadpool with a max thread count of 100. A normal, healthy thread pool has the following operations available:
- CreatePool(max_thread_count)
- AddJob(job)
- StopAndJoinAllThreads()

Any more operations are redundant. Don't use other thread pool operations; everything can be solved with these, and this threadpool interface can be implemented even on the dumbest platform that supports threads in any way. If you need any other operations, it simply means your design is bad. The same is true for thread operations: if you need something other than create or join, you are doing something the wrong way. You will see a lot of redundant, ugly/evil functions in threading APIs (like std::thread/pthread implementations/winapi), for example TerminateThread, "trylock", ... All of these are totally unnecessary and sometimes very dangerous function calls in an application; they are evil because they make attractive some antipatterns and solutions that look good at first glance, maybe even at second glance to the inexperienced. If you can't solve a problem with basic threading operations, ask for a solution on a forum. Normally you should be able to solve any problem just with thread pools and by placing jobs into them. In the worst case you need 2 thread pools: one that performs short-running jobs on at most as many threads as the number of cores, plus another thread pool that runs long-running operations; optionally this second pool can be a special one that creates a thread only when you add a long-running job to it...

In my opinion you will never need to increase the number of active threads in a pool; that is simply useless. You can actually decrease the number of active threads temporarily by N by putting N blocking jobs into the pool's queue; what these special jobs do is block the thread until you set an event in the job...

What if you don't want to precreate 100 threads for 100 clients? Then you can still write a special threadpool that creates a thread only when you call AddJob(), but in my opinion this is not a good idea. It's usually better to precreate threads at startup and keep them until program exit. Thread creation is an expensive operation (especially if you don't limit the stack sizes), and it is usually impossible to handle its failure gracefully in the middle of program execution (for example in a server program). It's better to fail this way either at program startup or never...

I have 100 threads and I'm just placing ServeClient jobs into the threadpool queue as connections arrive... How do I limit the number of accepted clients if I know that servicing a client takes a long time on a thread, so there is no point in putting more ServeClient jobs into the queue???
In this case you can manage a counter atomically (InterlockedIncrement/Decrement or the std atomic stuff) from the destructor of the ServeClient jobs and from your client acceptor (main) thread. In the destructor of your ServeClient job you decrement this counter; on your main thread you increment it. The InterlockedIncrement operation always returns the result of the increment. On your main thread, when you accept a connection, you increment this counter; if the result is >100 you just close the socket (maybe sending a "too many clients" message if you are doing it gracefully), atomically decrement the counter back, and accept the next client. (With a slightly more complex solution you can also sleep while you have 100 active clients...). If your main thread increments the counter and the result is <=100, just create the ServeClient job for the connection and put it into the queue.

Do you get it now? :-)

EDIT #2: Detached threads: Using detached threads the way you are using them (creating them and letting them go into the wild without supervision...) is definitely a wrong solution. I can still tell you a case where I use them. I mentioned that sometimes I use a special threadpool that creates a thread for every job exactly when you add the job to the pool; the new thread executes the job and then terminates/disappears by itself. Even so, when you try to terminate/delete the threadpool itself, you must be able to join/wait for all currently running threads (in the StopAndJoinAllThreads() method of the threadpool).
Note: If you create a joinable posix thread (including std::thread, which has the same rule) then you MUST join it exactly once. If you don't, you leak a small piece of memory that holds the exit code (and maybe some other info) of the thread. Note that in a threadpool where the thread terminates by itself after executing the job, the thread can't simply join itself to avoid this leak, and you don't have another thread in the pool that could do this thread-handle cleanup (OK, some dumb threadpool implementations have a so-called "manager thread"; avoid such dumb implementations). For this reason it is better to create this thread as detached by default (with the pthread API you can create it as detached immediately; with std::thread and winapi you can detach/close the handle after thread creation). OK, but now how do we handle the case when the main thread wants to delete the thread pool before program exit? It must somehow join the still-running threads of the special thread pool! For this you can keep an atomic counter in the threadpool, initialized to zero, along with a triggerable event set to false by default. You must also make sure that when you exit your main thread, no live threads are still using the threadpool at the moment you call StopAndJoinAllThreads() on it.

What your threadpool.StopAndJoin() does is the following:
- AtomicDecrement() the counter; if the result is -1, there are no running threads.
- If the result of AtomicDecrement() is zero or more, then at least one thread is still running. In this case you have to wait for the event to be fired.

Every time a new job is added you have to do the following:
- AtomicIncrement() the counter, create the thread, and give it the job.

On thread exit, the thread should do the following before actually returning from its low-level thread function:
- AtomicDecrement() the counter; if it is -1, it means threadpool.StopAndJoin() is waiting and this is actually the last thread to terminate, so set the event object on which StopAndJoin() is waiting.

As you can see, here I used a detached thread, but I couldn't tell you another scenario where I actually utilize detached threads, and even in this case they are hidden inside a thread pool, and all of them are guaranteed to be "joined" before threadpool destruction.


Some other bad news that came to my mind about dumb threadpools with manager threads: threadpools that come with a manager thread usually make use of it not for handle cleanup as in the previous problem, but because they try to create an overcomplicated, hyper-super-all-in-one general threadpool that can execute both short- and long-running jobs. In that case you have the problem that if a user adds a lot of long-running jobs, those jobs can occupy the full thread count, which causes some short-running jobs in the queue to "starve/timeout". For this reason the super-intelligent manager thread detects this (actually it detects that no thread has finished in the last x milliseconds) and spawns some new threads. Why is this bad? As I said, a threadpool has to know only these: Create(maxthreads), AddJob(job), StopAndJoin(). Thread programming is complicated enough even with this little functionality; putting a lot of super-intelligent "AI" code with manager threads into a threadpool increases complexity a lot, and almost all of these implementations suffer from race conditions and bugs that will be found by no one, because those who know what thread programming is will never look at those implementations, and those who choose them do so precisely because they don't know much about multithreading. OK, how do we solve the previous problem elegantly? As I told you, in such a scenario you have to use two pools: one for short tasks (with at most numcores threads), because short-task pools usually grind the cores, and a separate threadpool for long tasks; in special cases you can have even more if you have a good reason. You don't need artificial intelligence to decide when to create a thread; you can decide which jobs are short and which are long when you are writing your program. If you can't, then: 1.) don't write threaded code; 2.) learn multithreading before writing multithreaded code in a production environment. :-)


The question makes no sense: threads are always executed outside the function which created them. More exactly, threads and functions are orthogonal: any function can be executed by several threads, and each thread can call several functions (as in single-threaded programming). The structure of calls/returns, function parameters, and local variables is based on the stack; and each thread has its own separate stack.

Now, let's see what you are doing. You create a thread and then join the same thread. It means that the calling thread, the one which called the function join, is put in the wait state, that is, switched off and not scheduled back for execution until it is woken up. One of the events that wakes the waiting thread is completion of the thread you are trying to join. In other words, the calling thread sleeps while the other thread executes to the very end, and then the calling thread resumes execution. It simply means that the two threads work one after another and never do anything in parallel. In turn, it means that you totally defeat the purpose of threading.

I don't want to answer "How to keep threads working outside the function which created them". This is not what you want. You may think you want it, but in fact this question does not make any sense at all. What you really want is to understand what threads are, their purpose and usage. Right now you don't have a clue, not even close. I would advise you to learn about it; this is very important.

—SA

