Android MediaCodec backward seeking

Problem Description

I'm trying to implement precise seeking for video using MediaCodec and MediaExtractor. By following Grafika's MoviePlayer, I've managed to implement forward seeking. However, I'm still having trouble with backward seeking. The relevant bit of code is here:

public void seekBackward(long position){
    final int TIMEOUT_USEC = 10000;
    int inputChunk = 0;
    long firstInputTimeNsec = -1;

    boolean outputDone = false;
    boolean inputDone = false;

    mExtractor.seekTo(position, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    Log.d("TEST_MEDIA", "sampleTime: " + mExtractor.getSampleTime()/1000 + " -- position: " + position/1000 + " ----- BACKWARD");

    while (mExtractor.getSampleTime() < position && position >= 0) {

        if (VERBOSE) Log.d(TAG, "loop");
        if (mIsStopRequested) {
            Log.d(TAG, "Stop requested");
            return;
        }

        // Feed more data to the decoder.
        if (!inputDone) {
            int inputBufIndex = mDecoder.dequeueInputBuffer(TIMEOUT_USEC);
            if (inputBufIndex >= 0) {
                if (firstInputTimeNsec == -1) {
                    firstInputTimeNsec = System.nanoTime();
                }
                ByteBuffer inputBuf = mDecoderInputBuffers[inputBufIndex];
                // Read the sample data into the ByteBuffer.  This neither respects nor
                // updates inputBuf's position, limit, etc.
                int chunkSize = mExtractor.readSampleData(inputBuf, 0);
                if (chunkSize < 0) {
                    // End of stream -- send empty frame with EOS flag set.
                    mDecoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                    if (VERBOSE) Log.d(TAG, "sent input EOS");
                } else {
                    if (mExtractor.getSampleTrackIndex() != mTrackIndex) {
                        Log.w(TAG, "WEIRD: got sample from track " +
                                mExtractor.getSampleTrackIndex() + ", expected " + mTrackIndex);
                    }
                    long presentationTimeUs = mExtractor.getSampleTime();
                    mDecoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                            presentationTimeUs, 0 /*flags*/);
                    if (VERBOSE) {
                        Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" + chunkSize);
                    }
                    inputChunk++;
                    mExtractor.advance();
                }
            } else {
                if (VERBOSE) Log.d(TAG, "input buffer not available");
            }
        }

        if (!outputDone) {
            int decoderStatus = mDecoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
            if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
                if (VERBOSE) Log.d(TAG, "no output from decoder available");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // not important for us, since we're using Surface
                if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
            } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = mDecoder.getOutputFormat();
                if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
            } else if (decoderStatus < 0) {
                throw new RuntimeException(
                        "unexpected result from decoder.dequeueOutputBuffer: " +
                                decoderStatus);
            } else { // decoderStatus >= 0
                if (firstInputTimeNsec != 0) {
                    // Log the delay from the first buffer of input to the first buffer
                    // of output.
                    long nowNsec = System.nanoTime();
                    Log.d(TAG, "startup lag " + ((nowNsec-firstInputTimeNsec) / 1000000.0) + " ms");
                    firstInputTimeNsec = 0;
                }
                boolean doLoop = false;
                if (VERBOSE) Log.d(TAG, "surface decoder given buffer " + decoderStatus +
                        " (size=" + mBufferInfo.size + ")");
                if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    if (VERBOSE) Log.d(TAG, "output EOS");
                    if (mLoop) {
                        doLoop = true;
                    } else {
                        outputDone = true;
                    }
                }

                boolean doRender = (mBufferInfo.size != 0);

                // As soon as we call releaseOutputBuffer, the buffer will be forwarded
                // to SurfaceTexture to convert to a texture.  We can't control when it
                // appears on-screen, but we can manage the pace at which we release
                // the buffers.
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.preRender(mBufferInfo.presentationTimeUs);
                }
                mDecoder.releaseOutputBuffer(decoderStatus, doRender);
                doRender = false;   // note: forcing this to false means postRender() below is never called
                if (doRender && mFrameCallback != null) {
                    mFrameCallback.postRender();
                }

                if (doLoop) {
                    Log.d(TAG, "Reached EOS, looping");
                    mExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
                    inputDone = false;
                    mDecoder.flush();    // reset decoder state
                    mFrameCallback.loopReset();
                }
            }
        }
    }
}

Basically, it's the same as MoviePlayer's doExtract method. I just added a slight modification: seek back to the previous keyframe, then decode forward to the position I want. I've also followed fadden's suggestion here, with little success.

As a side question: to my understanding, ExoPlayer is built on top of MediaCodec, so how come it can play videos recorded on iOS just fine, while MoviePlayer's pure MediaCodec implementation can't?

Recommended Answer

OK, so this is how I solved my problem: basically, I had misunderstood fadden's comment about the render flag. The problem is not with the decoding; it's just a matter of displaying only the last buffer, the one closest to the seek position. Here is how I do it:

// Render (release with render == true) only the frame whose timestamp falls
// within 10 ms (10000 µs) of the target position; earlier frames are decoded
// but dropped.
if (Math.abs(position - mExtractor.getSampleTime()) < 10000) {
    mDecoder.releaseOutputBuffer(decoderStatus, true);
} else {
    mDecoder.releaseOutputBuffer(decoderStatus, false);
}

This is quite a hackish way to go about it. The elegant way would be to save the last output buffer and display it outside the while loop, but I don't really know how to access the output buffer so that I can save it to a temporary one.
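
For reference, copying the decoded frame out of the codec is only possible when the decoder runs in ByteBuffer mode (i.e. it was configured without an output Surface); the Surface path used above does not expose the pixel data. In that mode, a rough, untested sketch of saving the last output buffer before releasing it, reusing the mDecoder / mBufferInfo / decoderStatus names from the snippets above, could look like this:

// Sketch only: assumes the decoder was configured WITHOUT an output Surface,
// so dequeued output buffers actually contain the raw (typically YUV) frame data.
ByteBuffer lastFrame = null;
MediaCodec.BufferInfo lastFrameInfo = new MediaCodec.BufferInfo();

// ... inside the decode loop, after dequeueOutputBuffer() returned decoderStatus >= 0:
ByteBuffer outBuf = mDecoder.getOutputBuffer(decoderStatus); // API 21+
if (outBuf != null && mBufferInfo.size > 0) {
    // Deep-copy the frame so it stays valid after releaseOutputBuffer().
    outBuf.position(mBufferInfo.offset);
    outBuf.limit(mBufferInfo.offset + mBufferInfo.size);
    lastFrame = ByteBuffer.allocate(mBufferInfo.size);
    lastFrame.put(outBuf);
    lastFrame.flip();
    lastFrameInfo.set(0, mBufferInfo.size, mBufferInfo.presentationTimeUs, mBufferInfo.flags);
}
mDecoder.releaseOutputBuffer(decoderStatus, false); // nothing is rendered from here

// After the loop, lastFrame (if non-null) holds the raw data of the frame closest
// to the seek position; displaying it is then up to the caller.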

Here is a slightly less hackish way to do it. Basically, we only need to calculate the total number of frames between the keyframe and the seek position, and then display only the 1 or 2 frames closest to the seek position. Something like this:

mExtractor.seekTo(position, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
int stopPosition = getStopPosition(mExtractor.getSampleTime(), position);
int count = 0;

while (mExtractor.getSampleTime() < position && mExtractor.getSampleTime() != -1 && position >= 0) {
    ...

    if (stopPosition - count < 2) { // just to make sure we will get something (1 frame sooner); see getStopPosition comment
        mDecoder.releaseOutputBuffer(decoderStatus, true);
    } else {
        mDecoder.releaseOutputBuffer(decoderStatus, false);
    }
    count++;
    ...
}

/**
 * Calculate how many frames there are between the keyframe and the seek position,
 * so that we can determine how many iterations of the while loop will execute; then
 * we just need to stop the loop 2 or 3 frames sooner to ensure we get something.
 */
private int getStopPosition(long start, long end) {
    long delta = end - start;                    // in microseconds
    float framePerMicroSecond = mFPS / 1000000f; // float literal avoids integer division if mFPS is an int

    return (int) (delta * framePerMicroSecond);
}
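
A note on mFPS: it is assumed here to be a field holding the video's frame rate. When the container reports one, it can be read from the track's MediaFormat, roughly as sketched below (KEY_FRAME_RATE is optional, so a fallback is needed):

// Sketch: derive mFPS (assumed to be a float field) from the track format, if available.
MediaFormat format = mExtractor.getTrackFormat(mTrackIndex);
if (format.containsKey(MediaFormat.KEY_FRAME_RATE)) {
    mFPS = format.getInteger(MediaFormat.KEY_FRAME_RATE);
} else {
    mFPS = 30f; // fallback assumption when no frame rate is reported
}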
