Lot of UDP requests lost in UDP server with Netty


Problem description

I wrote a simple UDP server with Netty that simply prints out the received messages (frames) in the logs. To do that, I created a simple frame decoder and a simple message handler. I also have a client that can send multiple requests sequentially and/or in parallel.

When I configure my client tester to send, for example, a few hundred requests sequentially with a small delay between them, my server written with Netty receives them all properly. But the moment I increase the number of simultaneous requests in my client (100, for example), coupled with sequential ones and a few repeats, my server starts losing many requests. When I send 50000 requests, for example, my server only receives about 49000 when using only the simple ChannelHandler that prints out the received message.

And when I add the simple frame decoder (which prints out the frame and copies it into another buffer) in front of this handler, the server only handles half of the requests!

I noticed that no matter how many workers I specify to the created NioDatagramChannelFactory, there is always one and only one thread that handles the requests (I am using the recommended Executors.newCachedThreadPool() as the other parameter).

I also created another similar simple UDP server based on the DatagramSocket that comes with the JDK, and it handles every request perfectly, with zero lost! When I send 50000 requests from my client (with 1000 threads, for example), my server receives all 50000.
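For comparison, here is a minimal sketch of such a plain-JDK receive loop. This is a hypothetical reconstruction, not the asker's actual code; the port number and buffer sizes are assumptions.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class PlainUdpServer
{
    public static void main( String[] args ) throws Exception
    {
        DatagramSocket lSocket = new DatagramSocket( 9999 ); // port is an assumption
        lSocket.setReceiveBufferSize( 1048576 );             // a large SO_RCVBUF absorbs bursts
        byte[] lBuffer = new byte[2048];
        while ( true )
        {
            DatagramPacket lPacket = new DatagramPacket( lBuffer, lBuffer.length );
            lSocket.receive( lPacket );  // blocks until a datagram arrives
            System.out.println( "Received " + lPacket.getLength() + " bytes" );
        }
    }
}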

Am I doing something wrong while configuring my UDP server using Netty? Or maybe Netty is simply not designed to support such a load? Why does the given cached thread pool use only one thread (I noticed that only one thread, and always the same one, is used, by looking in JMX jconsole and by checking the thread name in the output logs)? I think that if more threads were used as expected, the server could easily handle such a load, because I can do it without any problem when not using Netty!

See my initialization code below:

...

lChannelfactory = new NioDatagramChannelFactory( Executors.newCachedThreadPool(), nbrWorkers );
lBootstrap = new ConnectionlessBootstrap( lChannelfactory );

lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline()
    {
        ChannelPipeline lChannelPipeline = Channels.pipeline();
        lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );            
        lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );            
        return lChannelPipeline;
    }
} );

bindChannel = lBootstrap.bind( socketAddress );

...

And the content of the decode() method in my decoder:

protected Object decode(ChannelHandlerContext iCtx, Channel iChannel, ChannelBuffer iBuffer) throws Exception
{
    ChannelBuffer lDuplicatedChannelBuffer = null;
    sLogger.debug( "Decode method called." );

    if ( iBuffer.readableBytes() < 8 ) return null;
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( iBuffer.readable() ) 
    {        
        sLogger.debug( convertToAsciiHex( iBuffer.array(), iBuffer.readableBytes() ) );                     
        lDuplicatedChannelBuffer = ChannelBuffers.dynamicBuffer( iBuffer.readableBytes() );            
        iBuffer.readBytes( lDuplicatedChannelBuffer );
    }

    return lDuplicatedChannelBuffer;
}

And the content of the messageReceived() method in my handler:

public void messageReceived(final ChannelHandlerContext iChannelHandlerContext, final MessageEvent iMessageEvent) throws Exception
{
    ChannelBuffer lMessageBuffer = (ChannelBuffer) iMessageEvent.getMessage();
    if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();

    if ( lMessageBuffer.readable() ) 
    {        
        sLogger.debug( convertToAsciiHex( lMessageBuffer.array(), lMessageBuffer.readableBytes() ) );            
        lMessageBuffer.discardReadBytes();
    }
}

Answer

You have not properly configured the ConnectionlessBootstrap instance.

  1. You have to configure the following options with optimal values:

SO_SNDBUF size, SO_RCVBUF size and a ReceiveBufferSizePredictorFactory:

lBootstrap.setOption("sendBufferSize", 1048576);

lBootstrap.setOption("receiveBufferSize", 1048576);

lBootstrap.setOption("receiveBufferSizePredictorFactory", 
 new AdaptiveReceiveBufferSizePredictorFactory(MIN_SIZE, INITIAL_SIZE, MAX_SIZE));

Check the DefaultNioDatagramChannelConfig class for more details.
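As a quick sanity check, you can also read back the effective values after binding, since the OS may clamp the requested buffer sizes. A minimal sketch, assuming the Netty 3.x API and the bindChannel variable from the question:

import org.jboss.netty.channel.socket.DatagramChannelConfig;

// bindChannel is the Channel returned by lBootstrap.bind( socketAddress ).
DatagramChannelConfig lConfig = (DatagramChannelConfig) bindChannel.getConfig();
System.out.println( "Effective SO_SNDBUF: " + lConfig.getSendBufferSize() );
System.out.println( "Effective SO_RCVBUF: " + lConfig.getReceiveBufferSize() );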

  2. The pipeline is doing everything on the Netty worker thread. If the worker thread is overloaded, the selector event loop execution is delayed and reading/writing the channel becomes a bottleneck. You have to add an ExecutionHandler to the pipeline, as shown below. It frees the worker thread to do its own work.

ChannelPipeline lChannelPipeline = Channels.pipeline();

lChannelPipeline.addFirst("execution-handler", new ExecutionHandler(
  new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));

//add rest of the handlers here
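Put together with the handlers from the question, the pipeline factory might then look like the following sketch (SimpleUDPPacketDumpDecoder, SimpleUDPPacketDumpChannelHandler and lOuterFrameStatsCollector come from the question; the executor parameters follow the example above):

lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline()
    {
        ChannelPipeline lChannelPipeline = Channels.pipeline();
        // Everything downstream of the ExecutionHandler runs on the pool threads,
        // leaving the I/O worker thread free to keep draining the socket.
        // OrderedMemoryAwareThreadPoolExecutor( threads, maxChannelMemory, maxTotalMemory )
        // also preserves per-channel event order.
        lChannelPipeline.addLast( "execution-handler", new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor( 16, 1048576, 1048576 ) ) );
        lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER", new SimpleUDPPacketDumpDecoder( null ) );
        lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER", new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );
        return lChannelPipeline;
    }
} );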

