Paramiko Sessions Closes Transport in the Child Process


Question


We are using paramiko to build a connection library that heavily uses its get_pty or invoke_shell features. Our library uses these channels to interact with the target device.


But whenever we use the multiprocessing library, we are unable to use the paramiko connection handles in the child process: the transport gets closed there.

Is there a way to tell paramiko not to close the connection/channel at fork?


This is a sample program that reproduces the problem:

from paramiko import SSHClient, AutoAddPolicy
from multiprocessing import Process
import logging
logging.getLogger("paramiko.transport").setLevel(logging.DEBUG)

client = SSHClient()

client.set_missing_host_key_policy(AutoAddPolicy())

client.connect(hostname="localhost")

def simple_work(handle):
    print("==== ENTERED CHILD PROCESS =====")
    stdin, stdout, stderr = handle.exec_command("ifconfig")
    print(stdout.read())
    print("==== EXITING CHILD PROCESS =====")

p = Process(target=simple_work, args=(client,))
p.start()
p.join(2)
print("==== MAIN PROCESS AFTER JOIN =====")
stdin, stdout, stderr = client.exec_command("ls")
print(stdout.read())

This is the error:

==== ENTERED CHILD PROCESS =====
Success for unrequested channel! [??]
==== MAIN PROCESS AFTER JOIN =====
Traceback (most recent call last):
  File "repro.py", line 22, in <module>
    stdin, stdout, stderr = client.exec_command("ls")
  File "/Users/vivejha/Projects/cisco/lib/python3.4/site-packages/paramiko/client.py", line 401, in exec_command
    chan = self._transport.open_session(timeout=timeout)
  File "/Users/vivejha/Projects/cisco/lib/python3.4/site-packages/paramiko/transport.py", line 702, in open_session
    timeout=timeout)
  File "/Users/vivejha/Projects/cisco/lib/python3.4/site-packages/paramiko/transport.py", line 823, in open_channel
    raise e
paramiko.ssh_exception.SSHException: Unable to open channel.


A few important things to note:


  1. If I try to access the client in the child process, first of all it doesn't work at all.


Secondly, the handle in the main process also dies, surprisingly. I don't know how this child-to-parent communication is facilitated, or why.


And the biggest problem is that the program hangs in the end; the exception is fine, but a hang is the least expected outcome.


If I don't use the client in the child process and do some other work instead, then the client in the parent process is not impacted and works as usual.


NOTE: There is something called atfork inside transport.py which claims to control this behaviour. But surprisingly, even commenting out the code in that method has no impact. Also, there are no references to atfork in the entire codebase of paramiko.


PS: I am using the latest paramiko, and this program was run on a Mac.

Answer


It is simply a fundamental problem when sockets are involved in a fork. Both processes share the same socket, but only one can use it. Imagine two different processes managing one socket: each is in a different state, e.g. one might be sending and receiving data to the remote side while the other is in a totally different crypto state. Think of nonces/initialization vectors: they become invalid after forking, once the two processes diverge.
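The shared-socket problem can be demonstrated without paramiko at all. In this stdlib-only sketch (POSIX, since it uses os.fork), a forked child inherits the same underlying socket as the parent, and both end up writing into one byte stream; for an encrypted protocol such as SSH, either process's traffic desynchronizes the cipher state kept by the other:

```python
import os
import socket

# A connected socket pair stands in for the SSH TCP connection.
ours, peer = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child: inherits the very same open file description as the parent.
    ours.sendall(b"child ")
    os._exit(0)

os.waitpid(pid, 0)
ours.sendall(b"parent")

# The peer sees a single interleaved stream written by two processes.
data = b""
while len(data) < len(b"child parent"):
    data += peer.recv(64)
print(data)  # → b'child parent'
```

Both writes arrive on one connection, which is exactly why the SSH server ends up seeing packets from a crypto state it no longer shares with either process.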


The solution to your problem is obviously to switch from multiprocessing to multithreading. That way you have only one SSH connection, shared across all threads. If you really want to use fork, you will have to create one new connection per fork.
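The one-connection-per-fork approach might look like the following sketch. `run_command` is a hypothetical worker, not paramiko API: the child is handed plain data (hostname, command) and opens its own SSHClient rather than inheriting the parent's transport.

```python
def run_command(hostname, command):
    # Hypothetical worker for a child process: it builds its *own*
    # SSHClient, so it has independent socket and crypto state and
    # cannot corrupt the parent's session.
    from paramiko import SSHClient, AutoAddPolicy

    client = SSHClient()
    client.set_missing_host_key_policy(AutoAddPolicy())
    client.connect(hostname=hostname)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        print(stdout.read())
    finally:
        client.close()

# Usage (requires a reachable SSH server) -- pass plain data to the
# child, never the live client object:
#   from multiprocessing import Process
#   p = Process(target=run_command, args=("localhost", "whoami"))
#   p.start()
#   p.join()
```

The paramiko import is kept inside the worker so the module stays importable in environments without paramiko, and so spawn-based multiprocessing re-imports it in the child.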

See transport.py:

def atfork(self):
    """
    Terminate this Transport without closing the session.  On posix
    systems, if a Transport is open during process forking, both parent
    and child will share the underlying socket, but only one process can
    use the connection (without corrupting the session).  Use this method
    to clean up a Transport object without disrupting the other process.
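Assuming that atfork method is still present in your paramiko version, its intended usage would presumably look like this hypothetical sketch, where the child detaches its inherited copy of the transport instead of letting it disturb the parent's session (though, as the question notes, results may vary):

```python
import os

def fork_without_breaking_session(client):
    # Hypothetical helper: the child calls Transport.atfork() so its
    # inherited copy of the transport is terminated *without* sending a
    # close to the server, leaving the parent's session usable.
    pid = os.fork()
    if pid == 0:
        client.get_transport().atfork()
        # ... child does non-SSH work here ...
        os._exit(0)
    os.waitpid(pid, 0)
    return pid
```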


In the paramiko log you'll see that your parent process receives an SSH_DISCONNECT_MSG from the remote side with the error Packet corrupt, most likely because the parent is in a different crypto state and sent a packet the server could not understand.

DEBUG:lala:==== ENTERED CHILD PROCESS =====
DEBUG:lala:<paramiko.SSHClient object at 0xb74bf1ac>
DEBUG:lala:<paramiko.Transport at 0xb6fed82cL (cipher aes128-ctr, 128 bits) (active; 0 open channel(s))>
DEBUG:paramiko.transport:[chan 1] Max packet in: 34816 bytes
WARNING:paramiko.transport:Success for unrequested channel! [??]


DEBUG:lala:==== MAIN PROCESS AFTER JOIN =====
WARNING:lala:<socket._socketobject object at 0xb706ef7c>
DEBUG:paramiko.transport:[chan 1] Max packet in: 34816 bytes
INFO:paramiko.transport:Disconnect (code 2): Packet corrupt


Here's a basic multithreading example using concurrent.futures:

from concurrent.futures import ThreadPoolExecutor

def simple_work(handle):
    print("==== ENTERED CHILD PROCESS =====")
    stdin, stdout, stderr = handle.exec_command("whoami")
    print(stdout.read())
    print("==== EXITING CHILD PROCESS =====")

with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(simple_work, client)
    print(future.result())

print("==== MAIN PROCESS AFTER JOIN =====")
stdin, stdout, stderr = client.exec_command("echo AFTER && whoami")
print(stdout.read())


Also note that in most cases you do not even need to introduce extra threading. Paramiko's exec_command already spawns a new thread and does not block until you try to read from one of the pseudo-files stdout/stderr. That means you could just execute a few commands and read from stdout later on. But keep in mind that paramiko might stall if those buffers run full.
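That deferred-read pattern might be sketched as below; `run_commands` is a hypothetical helper, not paramiko API. All channels are opened up front, since exec_command returns immediately, and the buffered output is only drained afterwards:

```python
def run_commands(client, commands):
    # Open one channel per command; exec_command does not block here,
    # because a background transport thread buffers each channel's output.
    channels = [client.exec_command(cmd) for cmd in commands]
    # Drain the buffered output afterwards. Caveat: a command producing
    # lots of output can stall once its channel window fills if nothing
    # reads stdout/stderr in the meantime.
    return [stdout.read() for _stdin, stdout, _stderr in channels]

# Usage (requires a connected paramiko.SSHClient):
#   outputs = run_commands(client, ["whoami", "uname -a"])
```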
