Java, Netty, TCP and UDP connection integration: No buffer space available for UDP connection

Question

I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP and, once the connection is established, UDP datagrams are sent. I have to support two scenarios of connecting to the server:

- the client connects while the server is running
- the client connects while the server is down and retries the connection until the server starts again

For the first scenario everything works fine: both connections are established. The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:

java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)

When I restart the client application without touching the server, the client connects without any problems.

What could be causing the problem?

Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is that I replaced the static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.

public final class UptimeClient {
static final String HOST = System.getProperty("host", "192.168.2.193");
static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));

private static UptimeClientHandler handler;

public void runClient() throws Exception {
    configureBootstrap(new Bootstrap()).connect();
}

private Bootstrap configureBootstrap(Bootstrap b) {
    return configureBootstrap(b, new NioEventLoopGroup());
}

@Override
protected Object clone() throws CloneNotSupportedException {
    return super.clone(); //To change body of generated methods, choose Tools | Templates.
}

Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
    if(handler == null){
            handler = new UptimeClientHandler(this);
    }
    b.group(g)
     .channel(NioSocketChannel.class)
     .remoteAddress(HOST, PORT)
     .handler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
        }
     });

    return b;
}

void connect(Bootstrap b) {
    b.connect().addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.cause() != null) {
                handler.startTime = -1;
                handler.println("Failed to connect: " + future.cause());
            }
        }
    });
}
}


@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
UptimeClient client;
public UptimeClientHandler(UptimeClient client){
    this.client = client;
}
long startTime = -1;

@Override
public void channelActive(ChannelHandlerContext ctx) {
    try {
        if (startTime < 0) {
            startTime = System.currentTimeMillis();
        }
        println("Connected to: " + ctx.channel().remoteAddress());
        new QuoteOfTheMomentClient(null).run();
    } catch (Exception ex) {
        Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
    }
}

@Override
public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
}

@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
    if (!(evt instanceof IdleStateEvent)) {
        return;
    }

    IdleStateEvent e = (IdleStateEvent) evt;
    if (e.state() == IdleState.READER_IDLE) {
        // The connection was OK but there was no traffic for last period.
        println("Disconnecting due to no inbound traffic");
        ctx.close();
    }
}

@Override
public void channelInactive(final ChannelHandlerContext ctx) {
    println("Disconnected from: " + ctx.channel().remoteAddress());
}

@Override
public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
    println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');

    final EventLoop loop = ctx.channel().eventLoop();
    loop.schedule(new Runnable() {
        @Override
        public void run() {
            println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
            client.connect(client.configureBootstrap(new Bootstrap(), loop));
        }
    }, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    cause.printStackTrace();
    ctx.close();
}

void println(String msg) {
    if (startTime < 0) {
        System.err.format("[SERVER IS DOWN] %s%n", msg);
    } else {
        System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
    }
    }
}

public final class QuoteOfTheMomentClient {

private ServerData config;
public QuoteOfTheMomentClient(ServerData config){
    this.config = config;
}

public void run() throws Exception {


    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioDatagramChannel.class)
         .option(ChannelOption.SO_BROADCAST, true)
         .handler(new QuoteOfTheMomentClientHandler());

        Channel ch = b.bind(0).sync().channel();

        ch.writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
                new InetSocketAddress("192.168.2.193", 8193))).sync();

        if (!ch.closeFuture().await(5000)) {
            System.err.println("QOTM request timed out.");
        }
    }
    catch(Exception ex)
    {
        ex.printStackTrace();
    }
    finally {
        group.shutdownGracefully();
    }
    }
}

public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {

@Override
public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
    String response = msg.content().toString(CharsetUtil.UTF_8);
    if (response.startsWith("QOTM: ")) {
        System.out.println("Quote of the Moment: " + response.substring(6));
        ctx.close();
    }
}

@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    cause.printStackTrace();
    ctx.close();
    }
}

Answer

If your server is Windows Server 2008 (R2 or R2 SP1), this problem is most likely the one described and solved by this stackoverflow answer, which refers to Microsoft KB article #2577795:

This issue occurs because of a race condition in the Ancillary Function Driver for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue that is described in the "Symptoms" section occurs if all available socket resources are exhausted.


If your server is Windows Server 2003, this problem is most likely the one described and solved by this stackoverflow answer, which refers to Microsoft KB article #196271:

The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the "Applies to" section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps...

...which basically means that you have run out of ephemeral ports.
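
The KB article's fix boils down to raising that limit (if I remember the article correctly, adding or raising the MaxUserPort DWORD under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and rebooting; check the article for the exact steps). On the client side, though, you can also simply stop consuming a new ephemeral UDP port (plus a whole NioEventLoopGroup) on every TCP reconnect attempt. A minimal sketch of that idea, binding the datagram channel once and reusing it for every QOTM request (class and method names here are illustrative, not from your code):

import java.net.InetSocketAddress;

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;

public final class SharedQuoteClient {

    // One event loop group and one UDP channel for the lifetime of the client,
    // instead of a fresh group + bind(0) on every TCP (re)connect.
    private final EventLoopGroup group = new NioEventLoopGroup();
    private Channel udpChannel;

    private synchronized Channel udpChannel() throws InterruptedException {
        if (udpChannel == null || !udpChannel.isActive()) {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioDatagramChannel.class)
                    .option(ChannelOption.SO_BROADCAST, true)
                    .handler(new QuoteOfTheMomentClientHandler());
            udpChannel = b.bind(0).sync().channel(); // single ephemeral port, reused afterwards
        }
        return udpChannel;
    }

    public void requestQuote(InetSocketAddress server) throws InterruptedException {
        udpChannel().writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8), server));
    }

    public void shutdown() {
        group.shutdownGracefully();
    }
}

With something like this, channelActive() in UptimeClientHandler would call requestQuote(...) instead of new QuoteOfTheMomentClient(null).run(). Note that QuoteOfTheMomentClientHandler closes the channel after each response, so for true reuse that ctx.close() would have to go as well; the isActive() check above only re-binds when that happens.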
