Java, Netty, TCP and UDP connection integration: No buffer space available for UDP connection

Problem description

I have an application which uses both the TCP and UDP protocols. The main assumption is that the client connects to the server via TCP and, once the connection is established, UDP datagrams are sent. I have to support two scenarios of connecting to the server:
- the client connects while the server is running
- the client connects while the server is down and retries the connection until the server starts again

For the first scenario everything works fine: both connections are established. The problem is with the second scenario. When the client tries a few times to connect via TCP and finally connects, the UDP connection function throws an exception:

java.net.SocketException: No buffer space available (maximum connections reached?): bind
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:344)
    at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
    at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
    at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:722)

When I restart the client application without doing anything to the server, the client connects without any problems.

What could be causing the problem?

Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is replacing the static variables and functions with non-static ones, because in the future I will need many TCP-UDP connections to multiple servers.

public final class UptimeClient {
    static final String HOST = System.getProperty("host", "192.168.2.193");
    static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
    static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
    static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));

    private static UptimeClientHandler handler;

    public void runClient() throws Exception {
        configureBootstrap(new Bootstrap()).connect();
    }

    private Bootstrap configureBootstrap(Bootstrap b) {
        return configureBootstrap(b, new NioEventLoopGroup());
    }

    Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
        if (handler == null) {
            handler = new UptimeClientHandler(this);
        }
        b.group(g)
         .channel(NioSocketChannel.class)
         .remoteAddress(HOST, PORT)
         .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
            }
         });

        return b;
    }

    void connect(Bootstrap b) {
        b.connect().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.cause() != null) {
                    handler.startTime = -1;
                    handler.println("Failed to connect: " + future.cause());
                }
            }
        });
    }
}


@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
    UptimeClient client;
    long startTime = -1;

    public UptimeClientHandler(UptimeClient client) {
        this.client = client;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        try {
            if (startTime < 0) {
                startTime = System.currentTimeMillis();
            }
            println("Connected to: " + ctx.channel().remoteAddress());
            new QuoteOfTheMomentClient(null).run();
        } catch (Exception ex) {
            Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (!(evt instanceof IdleStateEvent)) {
            return;
        }

        IdleStateEvent e = (IdleStateEvent) evt;
        if (e.state() == IdleState.READER_IDLE) {
            // The connection was OK but there was no traffic for last period.
            println("Disconnecting due to no inbound traffic");
            ctx.close();
        }
    }

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) {
        println("Disconnected from: " + ctx.channel().remoteAddress());
    }

    @Override
    public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
        println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');

        final EventLoop loop = ctx.channel().eventLoop();
        loop.schedule(new Runnable() {
            @Override
            public void run() {
                println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
                client.connect(client.configureBootstrap(new Bootstrap(), loop));
            }
        }, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }

    void println(String msg) {
        if (startTime < 0) {
            System.err.format("[SERVER IS DOWN] %s%n", msg);
        } else {
            System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
        }
    }
}

public final class QuoteOfTheMomentClient {

    private ServerData config;

    public QuoteOfTheMomentClient(ServerData config) {
        this.config = config;
    }

    public void run() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioDatagramChannel.class)
             .option(ChannelOption.SO_BROADCAST, true)
             .handler(new QuoteOfTheMomentClientHandler());

            Channel ch = b.bind(0).sync().channel();

            ch.writeAndFlush(new DatagramPacket(
                    Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
                    new InetSocketAddress("192.168.2.193", 8193))).sync();

            if (!ch.closeFuture().await(5000)) {
                System.err.println("QOTM request timed out.");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}

public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    @Override
    public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
        String response = msg.content().toString(CharsetUtil.UTF_8);
        if (response.startsWith("QOTM: ")) {
            System.out.println("Quote of the Moment: " + response.substring(6));
            ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

Answer

If your server is Windows Server 2008 (R2 or R2 SP1), this problem is likely the one described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:

This issue occurs because of a race condition in the Ancillary Function Driver for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue that is described in the "Symptoms" section occurs if all available socket resources are exhausted.
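
On a machine that has reached this state, you can usually confirm the exhaustion before applying the hotfix. A minimal check, assuming a Windows host (find /c simply counts the matching lines):

netstat -ano -p UDP | find /c ":"
netstat -ano -p TCP | find /c ":"

If these counts are in the tens of thousands and keep growing while your applications are idle, leaked sockets are the likely culprit.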

If your server is Windows Server 2003, this problem is likely the one described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:

The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the "Applies to" section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps...
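
For reference, the parameter the KB article adds is the MaxUserPort registry value. The sketch below is an assumption based on the article's description, so verify the exact steps against the KB before changing a production server:

Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: MaxUserPort (DWORD)
Data:  up to 65534 (the default effective maximum is 5000)

A reboot is required for the new value to take effect.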

...which basically means that you have run out of ephemeral ports.
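
Independently of the OS-level fix, note that the posted QuoteOfTheMomentClient creates a new NioEventLoopGroup and binds a fresh ephemeral UDP port on every call to run(), and the reconnect logic triggers that on every successful TCP connect. A minimal sketch of one way to reduce that port churn is to bind the datagram channel once and reuse it for all requests; the class and method names here are illustrative, not part of the original code:

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;

import java.net.InetSocketAddress;

// Illustrative sketch: one event loop group and one bound UDP channel,
// shared by all QOTM requests instead of one per request.
public final class ReusableQotmClient {

    private final EventLoopGroup group = new NioEventLoopGroup();
    private Channel channel;

    // Bind a single UDP socket up front instead of one per request.
    public void start() throws InterruptedException {
        Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioDatagramChannel.class)
                .handler(new SimpleChannelInboundHandler<DatagramPacket>() {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) {
                        // Do NOT close the channel here; it is shared by all requests.
                        System.out.println(msg.content().toString(CharsetUtil.UTF_8));
                    }
                });
        channel = b.bind(0).sync().channel();
    }

    // Every request reuses the already-bound channel and its single ephemeral port.
    public void requestQuote(InetSocketAddress server) {
        channel.writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8), server));
    }

    public void shutdown() {
        group.shutdownGracefully();
    }
}

With this structure the reconnect loop can keep calling requestQuote() without consuming a new port each time; whether that fits your design depends on how many servers you need to talk to.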
