Android MediaExtractor and mp3 stream


Problem description

I am trying to play an mp3 stream using MediaExtractor/MediaCodec. MediaPlayer is out of the question due to latency and long buffer size.

The only sample code I have found is this: http://dpsm.wordpress.com/category/android/

The code samples are only partial(?) and use a File instead of a stream.

I have been trying to adapt this example to play an audio stream, but I can't get my head around how this is supposed to work. The Android documentation, as usual, is no help.

I understand that first we get information about the stream, presumably set up the AudioTrack with this information (does the code sample include AudioTrack initialization?), and then open an input buffer and an output buffer.

I have recreated the code for this, filling in what I guess are the missing parts, but no audio comes out of it.

Can someone point me in the right direction to understand how this is supposed to work?

public final String LOG_TAG = "mediadecoderexample";
private static int TIMEOUT_US = -1;
MediaCodec codec;
MediaExtractor extractor;

MediaFormat format;
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
Boolean sawInputEOS = false;
Boolean sawOutputEOS = false;
AudioTrack mAudioTrack;
BufferInfo info;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    String url = "http://82.201.100.9:8000/RADIO538_WEB_MP3";
    extractor = new MediaExtractor();

    try {
        extractor.setDataSource(url);
    } catch (IOException e) {
    }

    format = extractor.getTrackFormat(0);
    String mime = format.getString(MediaFormat.KEY_MIME);
    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);

    Log.i(LOG_TAG, "===========================");
    Log.i(LOG_TAG, "url "+url);
    Log.i(LOG_TAG, "mime type : "+mime);
    Log.i(LOG_TAG, "sample rate : "+sampleRate);
    Log.i(LOG_TAG, "===========================");

    codec = MediaCodec.createDecoderByType(mime);
    codec.configure(format, null , null , 0);
    codec.start();

    codecInputBuffers = codec.getInputBuffers();
    codecOutputBuffers = codec.getOutputBuffers();

    extractor.selectTrack(0); 

    mAudioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC, 
            sampleRate, 
            AudioFormat.CHANNEL_OUT_STEREO, 
            AudioFormat.ENCODING_PCM_16BIT, 
            AudioTrack.getMinBufferSize (
                    sampleRate, 
                    AudioFormat.CHANNEL_OUT_STEREO, 
                    AudioFormat.ENCODING_PCM_16BIT
                    ), 
            AudioTrack.MODE_STREAM
            );

    info = new BufferInfo();


    input();
    output();


}

private void output()
{
    final int res = codec.dequeueOutputBuffer(info, TIMEOUT_US);
    if (res >= 0) {
        int outputBufIndex = res;
        ByteBuffer buf = codecOutputBuffers[outputBufIndex];

        final byte[] chunk = new byte[info.size];
        buf.get(chunk); // Read the buffer all at once
        buf.clear(); // ** MUST DO!!! OTHERWISE THE NEXT TIME YOU GET THIS SAME BUFFER BAD THINGS WILL HAPPEN

        if (chunk.length > 0) {
            mAudioTrack.write(chunk, 0, chunk.length);
        }
        codec.releaseOutputBuffer(outputBufIndex, false /* render */);

        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            sawOutputEOS = true;
        }
    } else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        codecOutputBuffers = codec.getOutputBuffers();
    } else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        final MediaFormat oformat = codec.getOutputFormat();
        Log.d(LOG_TAG, "Output format has changed to " + oformat);
        mAudioTrack.setPlaybackRate(oformat.getInteger(MediaFormat.KEY_SAMPLE_RATE));
    }

}

private void input()
{
    Log.i(LOG_TAG, "inputLoop()");
    int inputBufIndex = codec.dequeueInputBuffer(TIMEOUT_US);
    Log.i(LOG_TAG, "inputBufIndex : "+inputBufIndex);

    if (inputBufIndex >= 0) {   
        ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];

        int sampleSize = extractor.readSampleData(dstBuf, 0);
        Log.i(LOG_TAG, "sampleSize : "+sampleSize);
        long presentationTimeUs = 0;
        if (sampleSize < 0) {
            Log.i(LOG_TAG, "Saw input end of stream!");
            sawInputEOS = true;
            sampleSize = 0;
        } else {
            presentationTimeUs = extractor.getSampleTime();
            Log.i(LOG_TAG, "presentationTimeUs "+presentationTimeUs);
        }

        codec.queueInputBuffer(inputBufIndex,
                               0, //offset
                               sampleSize,
                               presentationTimeUs,
                               sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
        if (!sawInputEOS) {
            Log.i(LOG_TAG, "extractor.advance()");
            extractor.advance();

        }
     }

}
}

Adding logcat output for extra ideas:

03-10 16:47:54.115: I/mediadecoderexample(24643): ===========================
03-10 16:47:54.115: I/mediadecoderexample(24643): url ....
03-10 16:47:54.115: I/mediadecoderexample(24643): mime type : audio/mpeg
03-10 16:47:54.115: I/mediadecoderexample(24643): sample rate : 32000
03-10 16:47:54.115: I/mediadecoderexample(24643): ===========================
03-10 16:47:54.120: I/OMXClient(24643): Using client-side OMX mux.
03-10 16:47:54.150: I/Reverb(24643):  getpid() 24643, IPCThreadState::self()->getCallingPid() 24643
03-10 16:47:54.150: I/mediadecoderexample(24643): inputLoop()
03-10 16:47:54.155: I/mediadecoderexample(24643): inputBufIndex : 0
03-10 16:47:54.155: I/mediadecoderexample(24643): sampleSize : 432
03-10 16:47:54.155: I/mediadecoderexample(24643): presentationTimeUs 0
03-10 16:47:54.155: I/mediadecoderexample(24643): extractor.advance()
03-10 16:47:59.085: D/HTTPBase(24643): [2] Network BandWidth = 187 Kbps
03-10 16:47:59.085: D/NuCachedSource2(24643): Remaining (64K), HighWaterThreshold (20480K)
03-10 16:48:04.635: D/HTTPBase(24643): [3] Network BandWidth = 141 Kbps
03-10 16:48:04.635: D/NuCachedSource2(24643): Remaining (128K), HighWaterThreshold (20480K)
03-10 16:48:09.930: D/HTTPBase(24643): [4] Network BandWidth = 127 Kbps
03-10 16:48:09.930: D/NuCachedSource2(24643): Remaining (192K), HighWaterThreshold (20480K)
03-10 16:48:15.255: D/HTTPBase(24643): [5] Network BandWidth = 120 Kbps
03-10 16:48:15.255: D/NuCachedSource2(24643): Remaining (256K), HighWaterThreshold (20480K)
03-10 16:48:20.775: D/HTTPBase(24643): [6] Network BandWidth = 115 Kbps
03-10 16:48:20.775: D/NuCachedSource2(24643): Remaining (320K), HighWaterThreshold (20480K)
03-10 16:48:26.510: D/HTTPBase(24643): [7] Network BandWidth = 111 Kbps
03-10 16:48:26.510: D/NuCachedSource2(24643): Remaining (384K), HighWaterThreshold (20480K)
03-10 16:48:31.740: D/HTTPBase(24643): [8] Network BandWidth = 109 Kbps
03-10 16:48:31.740: D/NuCachedSource2(24643): Remaining (448K), HighWaterThreshold (20480K)
03-10 16:48:37.260: D/HTTPBase(24643): [9] Network BandWidth = 107 Kbps
03-10 16:48:37.260: D/NuCachedSource2(24643): Remaining (512K), HighWaterThreshold (20480K)
03-10 16:48:42.620: D/HTTPBase(24643): [10] Network BandWidth = 106 Kbps
03-10 16:48:42.620: D/NuCachedSource2(24643): Remaining (576K), HighWaterThreshold (20480K)
03-10 16:48:48.295: D/HTTPBase(24643): [11] Network BandWidth = 105 Kbps
03-10 16:48:48.295: D/NuCachedSource2(24643): Remaining (640K), HighWaterThreshold (20480K)
03-10 16:48:53.735: D/HTTPBase(24643): [12] Network BandWidth = 104 Kbps
03-10 16:48:53.735: D/NuCachedSource2(24643): Remaining (704K), HighWaterThreshold (20480K)
03-10 16:48:59.115: D/HTTPBase(24643): [13] Network BandWidth = 103 Kbps
03-10 16:48:59.115: D/NuCachedSource2(24643): Remaining (768K), HighWaterThreshold (20480K)
03-10 16:49:04.480: D/HTTPBase(24643): [14] Network BandWidth = 103 Kbps
03-10 16:49:04.480: D/NuCachedSource2(24643): Remaining (832K), HighWaterThreshold (20480K)
03-10 16:49:09.955: D/HTTPBase(24643): [15] Network BandWidth = 102 Kbps

Answer

The code in onCreate() suggests you have a misconception about how MediaCodec works. Your code is currently:

onCreate() {
    ...setup...
    input();
    output();
}

MediaCodec operates on access units. For video, each call to input/output would get you a single frame of video. I haven't worked with audio, but my understanding is that it behaves similarly. You don't get the entire file loaded into an input buffer, and it doesn't play the stream for you; you take one small piece of the file, hand it to the decoder, and it hands back decoded data (e.g. a YUV video buffer or PCM audio data). You then do whatever is necessary to play that data.

So your example would, at best, decode a fraction of a second of audio. You need to be doing submit-input-get-output in a loop with proper handling of end-of-stream. You can see this done for video in the various bigflake examples. It looks like your code has the necessary pieces.
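
For illustration, here is a minimal sketch of such a loop, assuming the same fields and imports as the question's Activity (codec, extractor, mAudioTrack). The method name decodeLoop(), the local buffer arrays, and the finite 10000 µs timeout are my own additions for the sketch, not anything prescribed by MediaCodec; run it on a background thread, not in onCreate().

private void decodeLoop() {
    final long timeoutUs = 10000; // finite timeout so dequeue calls don't block forever
    ByteBuffer[] inputBuffers = codec.getInputBuffers();
    ByteBuffer[] outputBuffers = codec.getOutputBuffers();
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    boolean sawInputEOS = false;
    boolean sawOutputEOS = false;

    while (!sawOutputEOS) {
        // Submit one access unit (one compressed mp3 frame) to the decoder.
        if (!sawInputEOS) {
            int inIndex = codec.dequeueInputBuffer(timeoutUs);
            if (inIndex >= 0) {
                ByteBuffer dstBuf = inputBuffers[inIndex];
                int sampleSize = extractor.readSampleData(dstBuf, 0);
                long presentationTimeUs = 0;
                if (sampleSize < 0) {
                    sawInputEOS = true;   // no more samples from the extractor
                    sampleSize = 0;
                } else {
                    presentationTimeUs = extractor.getSampleTime();
                }
                codec.queueInputBuffer(inIndex, 0, sampleSize, presentationTimeUs,
                        sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
                if (!sawInputEOS) {
                    extractor.advance();
                }
            }
        }

        // Drain any decoded PCM the codec has produced and hand it to the AudioTrack.
        int outIndex = codec.dequeueOutputBuffer(bufferInfo, timeoutUs);
        if (outIndex >= 0) {
            ByteBuffer buf = outputBuffers[outIndex];
            byte[] chunk = new byte[bufferInfo.size];
            buf.get(chunk);
            buf.clear();
            if (chunk.length > 0) {
                mAudioTrack.write(chunk, 0, chunk.length);
            }
            codec.releaseOutputBuffer(outIndex, false);
            if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                sawOutputEOS = true;      // decoder has flushed its last buffer
            }
        } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            outputBuffers = codec.getOutputBuffers();
        } else if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            MediaFormat oformat = codec.getOutputFormat();
            mAudioTrack.setPlaybackRate(oformat.getInteger(MediaFormat.KEY_SAMPLE_RATE));
        }
        // INFO_TRY_AGAIN_LATER just means nothing is ready yet; loop around and feed more input.
    }
}

One more thing worth checking: an AudioTrack in MODE_STREAM only consumes data once mAudioTrack.play() has been called (e.g. before entering the loop); otherwise the written PCM just fills the track's buffer without being heard.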

You're using a timeout of -1 (infinite), so you're going to supply one buffer of input and wait forever for a buffer of output. In video this wouldn't work -- the decoders I've tested seem to want about four buffers of input before they'll produce any output -- but again I haven't worked with audio, so I'm not sure if this is expected to work. Since your code is hanging I'm guessing it's not. It might be useful to change the timeout to (say) 10000 and see if the hang goes away.
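
If you only want to test the timeout theory, a hedged sketch of that single change in the question's output() method could look like the following; the 10000 µs value comes from the suggestion above, the INFO_TRY_AGAIN_LATER check is my addition, and everything else would stay as in the question.

private static final int TIMEOUT_US = 10000; // microseconds; finite, unlike the original -1

private void output() {
    int res = codec.dequeueOutputBuffer(info, TIMEOUT_US);
    if (res == MediaCodec.INFO_TRY_AGAIN_LATER) {
        // No decoded output yet. Expected for the first few calls, since the
        // decoder usually wants several queued input buffers before it emits
        // anything; return and submit more input instead of blocking forever.
        return;
    }
    // ... handle res >= 0, INFO_OUTPUT_BUFFERS_CHANGED and INFO_OUTPUT_FORMAT_CHANGED
    // exactly as in the question's output() above ...
}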

I'm assuming this is an experiment and you're not really going to do all this in onCreate(). :-)
