TTransportException when using TFramedTransport

Problem description

I'm pretty puzzled with this issue. I have an Apache Thrift 0.9.0 client and server. The client code goes like this:

this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(this.transport);
this.client = new ZKProtoService.Client(protocol);

This works fine. However, if I try to wrap the transport in a TFramedTransport:

this.transport = new TSocket(this.server, this.port);
final TProtocol protocol = new TBinaryProtocol(new TFramedTransport(this.transport));
this.client = new ZKProtoService.Client(protocol);

I get the following obscure exception (no explanation message whatsoever) on the client side. The server side shows no error.

org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
    at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at com.blablabla.android.core.device.proto.ProtoService$Client.recv_open(ProtoService.java:108)
    at com.blablabla.android.core.device.proto.ProtoService$Client.open(ProtoService.java:95)
    at com.blablabla.simpleprotoclient.proto.ProtoClient.initializeCommunication(ProtoClient.java:411)
    at com.blablabla.simpleprotoclient.proto.ProtoClient.doWork(ProtoClient.java:269)
    at com.blablabla.simpleprotoclient.proto.ProtoClient.run(ProtoClient.java:499)
    at java.lang.Thread.run(Thread.java:724)

It also fails if I use TCompactProtocol instead of TBinaryProtocol.

On the server side I have extended TProcessor with my own class, since I need to reuse the existing service handler (the service's server-side Iface implementation) for this client:

@Override
public boolean process(final TProtocol in, final TProtocol out)
        throws TException {
    final TTransport t = in.getTransport();
    final TSocket socket = (TSocket) t;
    socket.setTimeout(ProtoServer.SOCKET_TIMEOUT);
    // Identify the client by its remote address and port.
    final String clientAddress = socket.getSocket().getInetAddress()
            .getHostAddress();
    final int clientPort = socket.getSocket().getPort();
    final String clientRemote = clientAddress + ":" + clientPort;
    // Reuse the processor/handler pair for this client, creating it on first contact.
    ProtoService.Processor<ProtoServiceHandler> processor = PROCESSORS
            .get(clientRemote);
    if (processor == null) {
        final ProtoServiceHandler handler = new ProtoServiceHandler(clientRemote);
        processor = new ProtoService.Processor<ProtoServiceHandler>(handler);
        PROCESSORS.put(clientRemote, processor);
        HANDLERS.put(clientRemote, handler);
        ProtoClientConnectionChecker.addNewConnection(clientRemote, socket);
    }
    return processor.process(in, out);
}

And this is how I start the server side:

TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
TServer server = new TThreadPoolServer(new TThreadPoolServer.Args(
            serverTransport).processor(new ControlProcessor()));
Thread thControlServer = new Thread(new StartServer("Control", server));
thControlServer.start();

I have some questions:

  • Is it correct to reuse service handler instances, or shouldn't I be doing this?
  • Why does it fail when I use TFramedTransport or TCompactProtocol? How can I fix this?

Any help on this issue is welcome. Thanks in advance!

Recommended answer

I was having the same problem and finally found the answer. It is possible to set the transport type on the server, though this is not clear from most tutorials and examples I've found on the web. Have a look at all of the methods of the TServer.Args class (or the args classes for other servers, which extend TServer.AbstractServerArgs). There are methods inputTransportFactory and outputTransportFactory. You can pass new TFramedTransport.Factory() to each of these methods to declare which transport the server should use. In Scala:

  val handler = new ServiceStatusHandler
  val processor = new ServiceStatus.Processor(handler)
  val serverTransport = new TServerSocket(9090)
  val args = new TServer.Args(serverTransport)
    .processor(processor)
    .inputTransportFactory(new TFramedTransport.Factory)
    .outputTransportFactory(new TFramedTransport.Factory)
  val server = new TSimpleServer(args)
  println("Starting the simple server...")
  server.serve()
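
The same fix translated to the question's Java TThreadPoolServer might look like the sketch below (it reuses ControlProcessor and DEFAULT_CONTROL_PORT from the question; note that ControlProcessor casts the transport to TSocket, which would no longer hold once the server wraps each connection in a TFramedTransport, so that cast would need revisiting):

TServerTransport serverTransport = new TServerSocket(DEFAULT_CONTROL_PORT);
TThreadPoolServer.Args serverArgs = new TThreadPoolServer.Args(serverTransport)
        .processor(new ControlProcessor())
        // Frame both directions to match a client using TFramedTransport;
        // transportFactory(...) would set both at once.
        .inputTransportFactory(new TFramedTransport.Factory())
        .outputTransportFactory(new TFramedTransport.Factory());
TServer server = new TThreadPoolServer(serverArgs);
server.serve();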

Note that if you are using a TAsyncClient, you have no choice about the transport that you use. You must use TNonblockingTransport, which has only one standard implementation, TNonblockingSocket, which internally wraps whatever protocol you are using in a framed transport. It doesn't actually wrap your chosen protocol in a TFramedTransport, but it does prepend the length of the frame to the content that it writes, and expects the server to prepend the length of the response as well. This wasn't documented anywhere I found, but if you look at the source code and experiment with different combinations, you will find that with TSimpleServer you must use TFramedTransport to get it to work with an async client.
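
To illustrate (a rough sketch, not from the original answer; ZKProtoService stands in for the question's generated service, and the host and port are placeholders):

TAsyncClientManager clientManager = new TAsyncClientManager();
// TNonblockingSocket is the only standard TNonblockingTransport; it frames
// every message it writes, so the server must expect framed messages
// (e.g. TFramedTransport.Factory on a TSimpleServer or TThreadPoolServer).
TNonblockingTransport transport = new TNonblockingSocket("localhost", 9090);
ZKProtoService.AsyncClient client = new ZKProtoService.AsyncClient(
        new TBinaryProtocol.Factory(), clientManager, transport);
// Calls are then made through the generated async methods, each of which
// takes an AsyncMethodCallback and returns immediately.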

By the way, it's also worth noting that the docs say a TNonblockingServer must use TFramedTransport as the outermost layer of the transport. However, the examples don't show this being set in TNonblockingServer.Args, yet you still find that you must use TFramedTransport on the client side to successfully execute an RPC on the server. This is because TNonblockingServer.Args has its input and output transport factories set to TFramedTransport.Factory by default (you can see this by using reflection to inspect the fields of the superclass hierarchy, or in the source code of the AbstractNonblockingServerArgs constructor; you can override the input and output transports, but the server will likely fail for the reasons discussed in the documentation).
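
For example, a minimal TNonblockingServer setup (a sketch; processor stands for any generated TProcessor instance) works with framed clients without any explicit factory configuration:

TNonblockingServerTransport serverTransport = new TNonblockingServerSocket(9090);
// No inputTransportFactory/outputTransportFactory calls are needed here:
// AbstractNonblockingServerArgs already installs TFramedTransport.Factory.
TNonblockingServer server = new TNonblockingServer(
        new TNonblockingServer.Args(serverTransport).processor(processor));
server.serve();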
