Android MediaExtractor and mp3 stream

Problem Description

I am trying to play an mp3 stream using MediaExtractor/MediaCodec. MediaPlayer is out of the question due to its latency and long buffer size.

The only sample code I have found is this: http://dpsm.wordpress.com/category/android/

The code samples are only partial (?) and use a File instead of a stream.

I have been trying to adapt this example to play an audio stream, but I can't get my head around how this is supposed to work. The Android documentation, as usual, is no help.

I understand that first we get information about the stream, presumably set up the AudioTrack with this information (does the code sample include AudioTrack initialization?), and then open an input buffer and an output buffer.

I have recreated the code for this, adding what I can guess would be the missing parts, but no audio comes out of it.

Can someone point me in the right direction to understand how this is supposed to work?

public final String LOG_TAG = "mediadecoderexample";
private static int TIMEOUT_US = -1;
MediaCodec codec;
MediaExtractor extractor;

MediaFormat format;
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
Boolean sawInputEOS = false;
Boolean sawOutputEOS = false;
AudioTrack mAudioTrack;
BufferInfo info;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    String url = "http://82.201.100.9:8000/RADIO538_WEB_MP3";
    extractor = new MediaExtractor();

    try {
        extractor.setDataSource(url);
    } catch (IOException e) {
    }

    format = extractor.getTrackFormat(0);
    String mime = format.getString(MediaFormat.KEY_MIME);
    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);

    Log.i(LOG_TAG, "===========================");
    Log.i(LOG_TAG, "url "+url);
    Log.i(LOG_TAG, "mime type : "+mime);
    Log.i(LOG_TAG, "sample rate : "+sampleRate);
    Log.i(LOG_TAG, "===========================");

    codec = MediaCodec.createDecoderByType(mime);
    codec.configure(format, null , null , 0);
    codec.start();

    codecInputBuffers = codec.getInputBuffers();
    codecOutputBuffers = codec.getOutputBuffers();

    extractor.selectTrack(0); 

    mAudioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC, 
            sampleRate, 
            AudioFormat.CHANNEL_OUT_STEREO, 
            AudioFormat.ENCODING_PCM_16BIT, 
            AudioTrack.getMinBufferSize (
                    sampleRate, 
                    AudioFormat.CHANNEL_OUT_STEREO, 
                    AudioFormat.ENCODING_PCM_16BIT
                    ), 
            AudioTrack.MODE_STREAM
            );

    info = new BufferInfo();


    input();
    output();


}

private void output()
{
    final int res = codec.dequeueOutputBuffer(info, TIMEOUT_US);
    if (res >= 0) {
        int outputBufIndex = res;
        ByteBuffer buf = codecOutputBuffers[outputBufIndex];

        final byte[] chunk = new byte[info.size];
        buf.get(chunk); // Read the buffer all at once
        buf.clear(); // ** MUST DO!!! OTHERWISE THE NEXT TIME YOU GET THIS SAME BUFFER BAD THINGS WILL HAPPEN

        if (chunk.length > 0) {
            mAudioTrack.write(chunk, 0, chunk.length);
        }
        codec.releaseOutputBuffer(outputBufIndex, false /* render */);

        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            sawOutputEOS = true;
        }
    } else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        codecOutputBuffers = codec.getOutputBuffers();
    } else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        final MediaFormat oformat = codec.getOutputFormat();
        Log.d(LOG_TAG, "Output format has changed to " + oformat);
        mAudioTrack.setPlaybackRate(oformat.getInteger(MediaFormat.KEY_SAMPLE_RATE));
    }

}

private void input()
{
    Log.i(LOG_TAG, "inputLoop()");
    int inputBufIndex = codec.dequeueInputBuffer(TIMEOUT_US);
    Log.i(LOG_TAG, "inputBufIndex : "+inputBufIndex);

    if (inputBufIndex >= 0) {   
        ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];

        int sampleSize = extractor.readSampleData(dstBuf, 0);
        Log.i(LOG_TAG, "sampleSize : "+sampleSize);
        long presentationTimeUs = 0;
        if (sampleSize < 0) {
            Log.i(LOG_TAG, "Saw input end of stream!");
            sawInputEOS = true;
            sampleSize = 0;
        } else {
            presentationTimeUs = extractor.getSampleTime();
            Log.i(LOG_TAG, "presentationTimeUs "+presentationTimeUs);
        }

        codec.queueInputBuffer(inputBufIndex,
                               0, //offset
                               sampleSize,
                               presentationTimeUs,
                               sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
        if (!sawInputEOS) {
            Log.i(LOG_TAG, "extractor.advance()");
            extractor.advance();

        }
     }

}
}

Edit: adding logcat output for extra ideas.

03-10 16:47:54.115: I/mediadecoderexample(24643): ===========================
03-10 16:47:54.115: I/mediadecoderexample(24643): url ....
03-10 16:47:54.115: I/mediadecoderexample(24643): mime type : audio/mpeg
03-10 16:47:54.115: I/mediadecoderexample(24643): sample rate : 32000
03-10 16:47:54.115: I/mediadecoderexample(24643): ===========================
03-10 16:47:54.120: I/OMXClient(24643): Using client-side OMX mux.
03-10 16:47:54.150: I/Reverb(24643):  getpid() 24643, IPCThreadState::self()->getCallingPid() 24643
03-10 16:47:54.150: I/mediadecoderexample(24643): inputLoop()
03-10 16:47:54.155: I/mediadecoderexample(24643): inputBufIndex : 0
03-10 16:47:54.155: I/mediadecoderexample(24643): sampleSize : 432
03-10 16:47:54.155: I/mediadecoderexample(24643): presentationTimeUs 0
03-10 16:47:54.155: I/mediadecoderexample(24643): extractor.advance()
03-10 16:47:59.085: D/HTTPBase(24643): [2] Network BandWidth = 187 Kbps
03-10 16:47:59.085: D/NuCachedSource2(24643): Remaining (64K), HighWaterThreshold (20480K)
03-10 16:48:04.635: D/HTTPBase(24643): [3] Network BandWidth = 141 Kbps
03-10 16:48:04.635: D/NuCachedSource2(24643): Remaining (128K), HighWaterThreshold (20480K)
03-10 16:48:09.930: D/HTTPBase(24643): [4] Network BandWidth = 127 Kbps
03-10 16:48:09.930: D/NuCachedSource2(24643): Remaining (192K), HighWaterThreshold (20480K)
03-10 16:48:15.255: D/HTTPBase(24643): [5] Network BandWidth = 120 Kbps
03-10 16:48:15.255: D/NuCachedSource2(24643): Remaining (256K), HighWaterThreshold (20480K)
03-10 16:48:20.775: D/HTTPBase(24643): [6] Network BandWidth = 115 Kbps
03-10 16:48:20.775: D/NuCachedSource2(24643): Remaining (320K), HighWaterThreshold (20480K)
03-10 16:48:26.510: D/HTTPBase(24643): [7] Network BandWidth = 111 Kbps
03-10 16:48:26.510: D/NuCachedSource2(24643): Remaining (384K), HighWaterThreshold (20480K)
03-10 16:48:31.740: D/HTTPBase(24643): [8] Network BandWidth = 109 Kbps
03-10 16:48:31.740: D/NuCachedSource2(24643): Remaining (448K), HighWaterThreshold (20480K)
03-10 16:48:37.260: D/HTTPBase(24643): [9] Network BandWidth = 107 Kbps
03-10 16:48:37.260: D/NuCachedSource2(24643): Remaining (512K), HighWaterThreshold (20480K)
03-10 16:48:42.620: D/HTTPBase(24643): [10] Network BandWidth = 106 Kbps
03-10 16:48:42.620: D/NuCachedSource2(24643): Remaining (576K), HighWaterThreshold (20480K)
03-10 16:48:48.295: D/HTTPBase(24643): [11] Network BandWidth = 105 Kbps
03-10 16:48:48.295: D/NuCachedSource2(24643): Remaining (640K), HighWaterThreshold (20480K)
03-10 16:48:53.735: D/HTTPBase(24643): [12] Network BandWidth = 104 Kbps
03-10 16:48:53.735: D/NuCachedSource2(24643): Remaining (704K), HighWaterThreshold (20480K)
03-10 16:48:59.115: D/HTTPBase(24643): [13] Network BandWidth = 103 Kbps
03-10 16:48:59.115: D/NuCachedSource2(24643): Remaining (768K), HighWaterThreshold (20480K)
03-10 16:49:04.480: D/HTTPBase(24643): [14] Network BandWidth = 103 Kbps
03-10 16:49:04.480: D/NuCachedSource2(24643): Remaining (832K), HighWaterThreshold (20480K)
03-10 16:49:09.955: D/HTTPBase(24643): [15] Network BandWidth = 102 Kbps

Solution

The code in onCreate() suggests you have a misconception about how MediaCodec works. Your code is currently:

onCreate() {
    ...setup...
    input();
    output();
}

MediaCodec operates on access units. For video, each call to input/output would get you a single frame of video. I haven't worked with audio, but my understanding is that it behaves similarly. You don't get the entire file loaded into an input buffer, and it doesn't play the stream for you; you take one small piece of the file, hand it to the decoder, and it hands back decoded data (e.g. a YUV video buffer or PCM audio data). You then do whatever is necessary to play that data.

So your example would, at best, decode a fraction of a second of audio. You need to be doing submit-input-get-output in a loop with proper handling of end-of-stream. You can see this done for video in the various bigflake examples. It looks like your code has the necessary pieces.
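
To make that concrete, here is a minimal sketch of such a loop, reusing the input() and output() methods and fields from your code (the decodeLoop name is mine, and this is an illustration rather than a drop-in fix; error handling is omitted):

private void decodeLoop() {
    // Note: the posted code never calls play(); in MODE_STREAM the
    // AudioTrack stays silent until play() is called.
    mAudioTrack.play();
    // Keep feeding the decoder and draining it until the end-of-stream
    // flag has propagated through to the output side.
    while (!sawOutputEOS) {
        if (!sawInputEOS) {
            input();   // queue one access unit from the extractor
        }
        output();      // write one buffer of decoded PCM to the AudioTrack
    }
}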

You're using a timeout of -1 (infinite), so you're going to supply one buffer of input and wait forever for a buffer of output. In video this wouldn't work -- the decoders I've tested seem to want about four buffers of input before they'll produce any output -- but again I haven't worked with audio, so I'm not sure if this is expected to work. Since your code is hanging I'm guessing it's not. It might be useful to change the timeout to (say) 10000 and see if the hang goes away.
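
For example, changing the declaration to something like this (10000 microseconds is an arbitrary choice, just to see whether the hang goes away):

private static final long TIMEOUT_US = 10000;  // 10 ms wait instead of blocking forever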

I'm assuming this is an experiment and you're not really going to do all this in onCreate(). :-)
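
For what it's worth, one common pattern is to run the loop on a worker thread once setup is done, e.g. (using the hypothetical decodeLoop() sketched above):

new Thread(new Runnable() {
    @Override
    public void run() {
        decodeLoop();  // submit-input / get-output until end of stream
    }
}).start();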
