Lot of UDP requests lost in UDP server with Netty

Problem description

I wrote a simple UDP server with Netty that simply prints out the received messages (frames) in the logs. To do that, I created a simple frame decoder and a simple message handler. I also have a client that can send multiple requests sequentially and/or in parallel.

When I configure my client tester to send, for example, a few hundred requests sequentially with a small delay between them, my server written with Netty receives them all properly. But the moment I increase the number of simultaneous requests in my client (100, for example), coupled with sequential ones and a few repeats, my server starts losing many requests. When I send 50000 requests, for example, my server receives only about 49000 when using just the simple ChannelHandler that prints out the received message.

And when I add the simple frame decoder (which prints out the frame and copies it into another buffer) in front of this handler, the server only handles half of the requests!

I noticed that no matter how many workers I specify for the created NioDatagramChannelFactory, there is always one and only one thread that handles the requests (I am using the recommended Executors.newCachedThreadPool() as the other parameter).

I also created another, similar simple UDP server based on the DatagramSocket coming with the JDK, and it handles every request perfectly, with zero lost! When I send 50000 requests from my client (with 1000 threads, for example), I receive all 50000 on the server.
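
For reference, a plain-JDK UDP server of this kind can be as small as the sketch below. This is a minimal illustration rather than my actual code; the class name, port and buffer size are placeholder assumptions:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class PlainUdpServer
{
    public static void main( String[] args ) throws Exception
    {
        // One blocking DatagramSocket bound to an arbitrary test port.
        DatagramSocket lSocket = new DatagramSocket( 9999 );
        byte[] lBuffer = new byte[2048]; // assumed big enough for one frame

        while ( true )
        {
            DatagramPacket lPacket = new DatagramPacket( lBuffer, lBuffer.length );
            lSocket.receive( lPacket ); // blocks until a datagram arrives
            // Log the payload size and sender, mimicking the log-only handler.
            System.out.println( "Received " + lPacket.getLength()
                    + " bytes from " + lPacket.getSocketAddress() );
        }
    }
}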

Am I doing something wrong while configuring my UDP server using Netty? Or maybe Netty is simply not designed to support such a load? Why does the given cached thread pool use only one thread (by looking in JConsole and checking the thread name in the output logs, I noticed that one and the same thread is always used)? I think that if more threads were used, as expected, the server would easily handle such a load, because I can do it without any problem when not using Netty!

Please find below my initialization code:

...

lChannelfactory = new NioDatagramChannelFactory( Executors.newCachedThreadPool(), nbrWorkers );
lBootstrap = new ConnectionlessBootstrap( lChannelfactory );

lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline()
    {
        ChannelPipeline lChannelPipeline = Channels.pipeline();
        lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );            
        lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );            
        return lChannelPipeline;
    }
} );

bindChannel = lBootstrap.bind( socketAddress );

...

And the content of the decode() method of my decoder:

protected Object decode(ChannelHandlerContext iCtx, Channel iChannel, ChannelBuffer iBuffer) throws Exception
{
    ChannelBuffer lDuplicatedChannelBuffer = null;
    sLogger.debug( "Decode method called." );

    // Require at least an 8-byte frame before decoding anything.
    if ( iBuffer.readableBytes() < 8 ) return null;
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( iBuffer.readable() )
    {
        sLogger.debug( convertToAsciiHex( iBuffer.array(), iBuffer.readableBytes() ) );
        // Copy the readable bytes into a fresh buffer that is passed downstream.
        lDuplicatedChannelBuffer = ChannelBuffers.dynamicBuffer( iBuffer.readableBytes() );
        iBuffer.readBytes( lDuplicatedChannelBuffer );
    }

    return lDuplicatedChannelBuffer;
}

And the content of the messageReceived() method of my handler:

public void messageReceived(final ChannelHandlerContext iChannelHandlerContext, final MessageEvent iMessageEvent) throws Exception
{
    ChannelBuffer lMessageBuffer = (ChannelBuffer) iMessageEvent.getMessage();
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( lMessageBuffer.readable() )
    {
        // Dump the frame as hex, then discard what has been read.
        sLogger.debug( convertToAsciiHex( lMessageBuffer.array(), lMessageBuffer.readableBytes() ) );
        lMessageBuffer.discardReadBytes();
    }
}

Recommended answer

You have not properly configured the ConnectionlessBootstrap instance.

1. You have to configure the following options with optimal values:

SO_SNDBUF size, SO_RCVBUF size, and a ReceiveBufferSizePredictorFactory:

lBootstrap.setOption("sendBufferSize", 1048576);

lBootstrap.setOption("receiveBufferSize", 1048576);

lBootstrap.setOption("receiveBufferSizePredictorFactory", 
 new AdaptiveReceiveBufferSizePredictorFactory(MIN_SIZE, INITIAL_SIZE, MAX_SIZE));
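
As a point of reference (recalled from the Netty 3 sources, so treat the numbers as an assumption): AdaptiveReceiveBufferSizePredictorFactory defaults to a minimum of 64, an initial size of 1024 and a maximum of 65536 bytes. Whatever MIN_SIZE, INITIAL_SIZE and MAX_SIZE you pick, the maximum should be at least as large as your biggest expected datagram, because with UDP any bytes that do not fit into the receive buffer are silently dropped.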

Check the DefaultNioDatagramChannelConfig class for more details.
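
If you want to verify what the OS actually granted (Linux, for instance, caps SO_RCVBUF at net.core.rmem_max), you can read the effective values back after binding. A small sketch, assuming the bindChannel variable and sLogger logger from the question's code:

DatagramChannelConfig lConfig = ((DatagramChannel) bindChannel).getConfig();
sLogger.debug( "Effective SO_SNDBUF: " + lConfig.getSendBufferSize() );
sLogger.debug( "Effective SO_RCVBUF: " + lConfig.getReceiveBufferSize() );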

2. The pipeline is doing everything on the Netty worker thread. If the worker thread is overloaded, the selector event loop execution is delayed and there will be a bottleneck in reading/writing the channel. This is also why only one thread shows up in your logs: in Netty 3 a datagram channel is serviced by a single worker thread, so adding workers does not spread the load for one bound channel. You have to add an ExecutionHandler to the pipeline as follows; it will free the worker thread to do its own work.

ChannelPipeline lChannelPipeline = Channels.pipeline();
lChannelPipeline.addFirst("execution-handler", new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));

//add rest of the handlers here
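
Putting the two suggestions together, the initialization code from the question might end up looking like the sketch below. This is a hedged illustration: the buffer sizes, predictor bounds and executor parameters are assumed placeholders to be tuned, not recommendations:

lBootstrap.setOption( "sendBufferSize", 1048576 );
lBootstrap.setOption( "receiveBufferSize", 1048576 );
lBootstrap.setOption( "receiveBufferSizePredictorFactory",
        new AdaptiveReceiveBufferSizePredictorFactory( 64, 1024, 65536 ) );

lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline()
    {
        ChannelPipeline lChannelPipeline = Channels.pipeline();
        // Everything after this handler runs on the executor's threads, so the
        // I/O worker can get back to the selector loop as fast as possible.
        lChannelPipeline.addFirst( "execution-handler", new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor( 16, 1048576, 1048576 ) ) );
        lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );
        lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );
        return lChannelPipeline;
    }
} );

OrderedMemoryAwareThreadPoolExecutor preserves the order of events per channel, which matters if frames must be processed in arrival order; the plain MemoryAwareThreadPoolExecutor is an alternative when ordering does not matter.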
