LEAK: ByteBuf.release() was not called - how do we solve this?


Question

We have a netty-based network traffic intensive Java app/server.

Side note: I mostly maintain this app; I didn't build it, so I don't fully understand it.

We sometimes get the error shown below.

Previously we used to get this error after the server had been up for 3-4 days. Now I've noticed that we get this error even just 10-15 minutes after restarting the server/app.

I don't understand how this is possible. Is this error something to worry about, and how can we fix it? I recall doing extensive research on this same error in the past; back then I even tried upgrading and patching Netty, but nothing fully resolved the issue.

OS: Linux
Java version: 1.8
Netty version: netty-all-4.1.30.Final.jar

This is the only line of app-specific code; everything else happens inside Netty.

com.company.japp.protocol.http.decoders.ConditionalHttpChunkAggregator.channelRead

Is this some sort of bug in Netty itself? Would a Netty upgrade or any other configuration tuning help here?

[2020-09-04 08:33:53,072] ERROR io.netty.util.ResourceLeakDetector
LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.

Recent access records: 
Created at:
    io.netty.buffer.AbstractByteBufAllocator.compositeDirectBuffer(AbstractByteBufAllocator.java:221)
    io.netty.buffer.AbstractByteBufAllocator.compositeBuffer(AbstractByteBufAllocator.java:199)
    io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:255)
    io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
    com.company.japp.protocol.http.decoders.ConditionalHttpChunkAggregator.channelRead(ConditionalHttpChunkAggregator.java:112)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)

Here is the code for ConditionalHttpChunkAggregator.

package com.company.japp.protocol.http.decoders;

import com.company.japp.IHttpProxyServer;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

import java.util.HashSet;

@ChannelHandler.Sharable
public class ConditionalHttpChunkAggregator extends HttpObjectAggregator {
    private static final InternalLogger logger = InternalLoggerFactory.getInstance(ConditionalHttpChunkAggregator.class);

    private volatile boolean sendaschunked;
    private volatile int maxContentLength;

    private static IHttpProxyServer iHttpProxyServer;

    public static void initialize(IHttpProxyServer iHttpProxyServer) {
        ConditionalHttpChunkAggregator.iHttpProxyServer = iHttpProxyServer;
    }

    public ConditionalHttpChunkAggregator(int maxContentLength) {
        super(maxContentLength);
        this.maxContentLength = maxContentLength;
        sendaschunked = false;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg){
        if ((msg instanceof HttpResponse)) {
            HttpResponse response = (HttpResponse)msg;
            if ((msg instanceof HttpMessage)) {
                HttpMessage httpmessage= (HttpMessage)msg;

                try  {
                    // If the content length exceeds the threshold, then send it as chunked:
                    // it's too large to process substitutions.
                    Long contentlength = 
                            httpmessage.headers().get(HttpHeaders.Names.CONTENT_LENGTH) != null ? 
                            Long.valueOf(httpmessage.headers().get(HttpHeaders.Names.CONTENT_LENGTH)) : -1;
                    if (contentlength >= maxContentLength) {
                        sendaschunked = true;
                    } else {
                        // Check content types
                        HashSet<String> chunkabletypes = iHttpProxyServer.getConfig().getProperty("chunkabletypes");
                        if (!chunkabletypes.isEmpty() && response.headers().contains(HttpHeaders.Names.CONTENT_TYPE)) {
                            String contentType = response.headers().get(HttpHeaders.Names.CONTENT_TYPE).toLowerCase().trim();
                            if (contentType.length()>0) {
                                sendaschunked = chunkabletypes.contains(contentType);
                                if (!sendaschunked) {
                                    for (String chunkabletype: chunkabletypes) {
                                        // Begins with
                                        if (contentType.indexOf(chunkabletype)==0) {
                                            sendaschunked = true;
                                            break;
                                        }
                                    }
                                }
                            }
                        }
                    }
                    if (sendaschunked) {
                        ctx.fireChannelRead(msg);
                        return;
                    }
                }
                catch(Exception ex) {
                    logger.error("error determining chunkable viability", ex);
                }
            }
        }
        if (sendaschunked) {
            ctx.fireChannelRead(msg);
            return;
        }

        try {
            super.channelRead(ctx, msg);
        } catch (Exception e) {
            logger.error("error determining chunkable viability", e);
            e.printStackTrace();
        }
    }
}

And this is the value of the chunkable types property:

video/x-ms-wvx,video/x-flv,application/x-shockwave-flash,video/quicktime,video/,audio/

I think this is a bug in netty 4.1.30, around line 255 in io.netty.handler.codec.MessageAggregator.
It seems this CompositeByteBuf is allocated but not released.
Am I right?
I'm hoping for an authoritative answer to either confirm or reject this idea.

        // A streamed message - initialize the cumulative buffer, and wait for incoming chunks.
        CompositeByteBuf content = ctx.alloc().compositeBuffer(maxCumulationBufferComponents); // LINE 255 
        if (m instanceof ByteBufHolder) {
            appendPartialContent(content, ((ByteBufHolder) m).content());
        }
        currentMessage = beginAggregation(m, content);

Answer

You need to release the ByteBuf that you allocated. This is not a Netty bug.
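As a diagnostic step (not a fix), Netty's leak detector can be switched to its most aggressive level so that every allocated buffer is tracked and the access records in the report become more complete. A sketch, assuming the JVM launch command can be edited; `japp-server.jar` stands in for the real launch artifact:

```shell
# Track every allocated buffer and record every access (slow; diagnosis only).
# Levels: disabled, simple (the default), advanced, paranoid.
java -Dio.netty.leakDetection.level=paranoid -jar japp-server.jar
```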

The leaked ByteBuf is at com.company.japp.protocol.http.decoders.ConditionalHttpChunkAggregator.channelRead(ConditionalHttpChunkAggregator.java:112)
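As a hedged illustration of the contract that was violated, here is a minimal, self-contained sketch of reference counting in the style of Netty's ReferenceCounted. The `SimpleRefCounted` class is an illustrative stand-in, not a Netty API: a handler that consumes a message without passing it further down the pipeline owns the reference and must call `release()` on it, typically in a `finally` block.

```java
// Illustrative stand-in for io.netty.util.ReferenceCounted (not a Netty class).
class SimpleRefCounted {
    private int refCnt = 1; // a freshly allocated buffer starts with one reference

    int refCnt() { return refCnt; }

    SimpleRefCounted retain() { refCnt++; return this; }

    // Returns true when the count hits 0, i.e. when the memory would be reclaimed.
    boolean release() {
        if (refCnt <= 0) throw new IllegalStateException("already released");
        return --refCnt == 0;
    }
}

public class RefCountDemo {
    // Mimics a handler that inspects a message but does not forward it:
    // it owns the reference and must release it, even if inspection throws.
    static void consume(SimpleRefCounted msg) {
        try {
            // ... inspect / aggregate the message here ...
        } finally {
            msg.release(); // without this line, the leak detector reports a LEAK
        }
    }

    public static void main(String[] args) {
        SimpleRefCounted msg = new SimpleRefCounted();
        consume(msg);
        System.out.println("refCnt after consume: " + msg.refCnt()); // prints 0
    }
}
```

The inverse rule also matters here: a handler that forwards the message via `ctx.fireChannelRead(msg)` transfers ownership downstream and must not release it itself.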
