What is the best way to achieve Audio Video Synchronization in an Android-based Media Player Application using the MediaCodec API?

Problem Description

I'm trying to implement a Media Player in Android using the MediaCodec API.

I've created three threads. Thread 1: de-queues the input buffers to get free indices and then queues the audio and video frames into the respective codec's input buffers.

Thread 2: de-queues the audio codec's output buffers and renders the decoded audio using the AudioTrack class's write method.

Thread 3: de-queues the video codec's output buffers and renders them using the releaseOutputBuffer method.
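
For reference, here is a minimal sketch of what the two output threads might look like. This is not my actual code: audioDecoder, videoDecoder, running and shouldRender are placeholder names, and format-change/EOS handling is omitted.

// Thread 2 (runs on its own thread): drain the audio decoder into AudioTrack.
MediaCodec.BufferInfo audioInfo = new MediaCodec.BufferInfo();
while (running) {
    int index = audioDecoder.dequeueOutputBuffer(audioInfo, 10000 /* timeoutUs */);
    if (index >= 0) {
        ByteBuffer pcm = audioDecoder.getOutputBuffer(index); // API 21+
        byte[] chunk = new byte[audioInfo.size];
        pcm.get(chunk);
        audioTrack.write(chunk, 0, chunk.length);       // blocking write paces playback
        audioDecoder.releaseOutputBuffer(index, false); // no surface rendering for audio
    }
}

// Thread 3 (runs on its own thread): drain the video decoder to the surface.
MediaCodec.BufferInfo videoInfo = new MediaCodec.BufferInfo();
while (running) {
    int index = videoDecoder.dequeueOutputBuffer(videoInfo, 10000 /* timeoutUs */);
    if (index >= 0) {
        boolean render = shouldRender(videoInfo.presentationTimeUs); // a-v sync decision
        videoDecoder.releaseOutputBuffer(index, render); // true renders to the surface
    }
}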

I'm facing a lot of problems achieving synchronization between audio and video frames. I never drop audio frames, and before rendering a video frame I check whether the decoded frame is late by more than 30 ms; if it is, I drop the frame, and if it is more than 10 ms early I don't render it yet.
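
In code, that gating looks roughly like this (a sketch only; outIndex, info and videoCodec are placeholders, and calculateLateByUs is the method shown below):

long lateByUs = calculateLateByUs(info.presentationTimeUs);
if (lateByUs > 30000) {
    // More than 30 ms late: drop the frame without rendering it.
    videoCodec.releaseOutputBuffer(outIndex, false);
} else if (lateByUs < -10000) {
    // More than 10 ms early: hold the frame back and re-check later,
    // e.g. sleep roughly (-lateByUs - 10000) / 1000 milliseconds.
} else {
    // Within the window: render the frame to the surface.
    videoCodec.releaseOutputBuffer(outIndex, true);
}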

To find the difference between audio and video I use the following logic:

public long calculateLateByUs(long timeUs) {
    long nowUs = 0;

    if (hasAudio && audioTrack != null) {
        synchronized (audioTrack) {
            // Re-anchor the clock once the first audio sample starts playing.
            if (first_audio_sample && startTimeUs >= 0) {
                System.out.println("First video after audio Time Us: " + timeUs);
                startTimeUs = -1;
                first_audio_sample = false;
            }

            // Audio clock: frames played so far, converted to microseconds.
            nowUs = (audioTrack.getPlaybackHeadPosition() * 1000000L) /
                    audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
        }
    } else if (!hasAudio) {
        // No audio track at all: fall back to the system clock.
        nowUs = System.currentTimeMillis() * 1000;
        startTimeUs = 0;
    } else {
        // Audio track exists but is not initialized yet.
        nowUs = System.currentTimeMillis() * 1000;
    }

    // Anchor the initial offset between the clock and the first frame.
    if (startTimeUs == -1) {
        startTimeUs = nowUs - timeUs;
    }

    if (syslog) {
        System.out.println("Timing Statistics:");
        System.out.println("Key Sample Rate :" + audioCodec.format.getInteger(MediaFormat.KEY_SAMPLE_RATE)
                + " nowUs: " + nowUs + " startTimeUs: " + startTimeUs
                + " timeUs: " + timeUs + " return value :" + (nowUs - (startTimeUs + timeUs)));
    }

    return (nowUs - (startTimeUs + timeUs));
}

timeUs is the presentation time in microseconds of the video frame. nowUs is supposed to contain the duration in microseconds for which audio has been playing. startTimeUs is the initial difference between the audio and video frames, which has to be maintained throughout.

The first if block checks whether there is indeed an audio track and it has been initialized, and sets the value of nowUs by calculating it from the AudioTrack. If there is no audio (first else), nowUs is set to the system time and the initial gap is set to zero. startTimeUs is initialized to zero in the main function.

The if block inside the synchronized block handles the case where the first frame to be rendered is video and the audio joins later; once the first audio sample plays, startTimeUs is re-anchored. The first_audio_sample flag is initially set to true.
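
A quick worked example with made-up numbers may make the anchoring clearer:

// Hypothetical values, only to illustrate the arithmetic:
// the first video frame carries timeUs = 0 and arrives when the audio clock
// already reads 40000 us, so startTimeUs anchors at 40000.
// A later frame with timeUs = 1000000 is then compared against the clock:
long nowUs = 1040000L, startTimeUs = 40000L, timeUs = 1000000L;
long lateByUs = nowUs - (startTimeUs + timeUs); // = 0, exactly on time
// lateByUs > 30000 would mean "drop"; lateByUs < -10000 would mean "too early".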

Please let me know if anything is not clear.

Also, if you know of any open-source link where a media player for an a-v file has been implemented using MediaCodec, that would be great.

Recommended Answer

If you are working on one of the latest releases of Android, you can consider retrieving an AudioTimestamp from AudioTrack directly via getTimestamp. Please refer to the documentation at http://developer.android.com/reference/android/media/AudioTrack.html#getTimestamp(android.media.AudioTimestamp) for more details. Similarly, you could also consider retrieving the sampling rate via getSampleRate.
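
A sketch of how the audio clock could be read that way (assumes API 19+ for getTimestamp; the extrapolation and the fallback path are my own, not part of the answer):

AudioTimestamp ts = new AudioTimestamp();
long nowUs;
if (audioTrack.getTimestamp(ts)) {
    // Extrapolate the timestamped frame position forward to "now".
    long elapsedNs = System.nanoTime() - ts.nanoTime;
    long frames = ts.framePosition
            + (elapsedNs * audioTrack.getSampleRate()) / 1000000000L;
    nowUs = (frames * 1000000L) / audioTrack.getSampleRate();
} else {
    // No timestamp available yet: fall back to the playback head position
    // (an unsigned 32-bit frame counter, hence the mask).
    nowUs = ((audioTrack.getPlaybackHeadPosition() & 0xFFFFFFFFL) * 1000000L)
            / audioTrack.getSampleRate();
}

getTimestamp is generally preferable to getPlaybackHeadPosition here because it accounts for the output latency of the audio pipeline.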

If you wish to continue with your algorithm, you could consider a relatively similar implementation in a native example: SimplePlayer (found under frameworks/av/cmds/stagefright in AOSP) implements a player engine employing MediaCodec and has an a-v sync section too. Please refer to the section of its code where the synchronization is performed; I feel this should help as a good reference.
