android - How to mux audio file and video file?


Question

I have a 3gp file recorded from the microphone and an mp4 video file. I want to mux the audio file and the video file into a single mp4 file and save it. I searched a lot but didn't find anything helpful on using Android's MediaMuxer API. MediaMuxer API
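
For reference, the general MediaMuxer workflow is: add every track with addTrack() first, then call start(), write the compressed samples, and finally stop() and release(). Below is a minimal sketch of that lifecycle (my own illustration, not code from the original post; the single-track assumption, the buffer size and the path parameters are placeholders):

import java.io.IOException;
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;

// Copies the first track of srcPath into a new mp4 at dstPath without re-encoding.
private void copyFirstTrack(String srcPath, String dstPath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(srcPath);
    extractor.selectTrack(0);
    MediaFormat format = extractor.getTrackFormat(0);

    MediaMuxer muxer = new MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    int dstTrack = muxer.addTrack(format);   // all addTrack() calls must come before start()
    muxer.start();

    ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) {
            break;                           // no more samples in the source
        }
        info.offset = 0;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags();
        muxer.writeSampleData(dstTrack, buffer, info);
        extractor.advance();
    }

    muxer.stop();                            // throws IllegalStateException if nothing was written
    muxer.release();
    extractor.release();
}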

UPDATE: This is my method that muxes the two files, and I get an exception in it. The reason is that the destination mp4 file doesn't have any track! Can someone help me add the audio and video tracks to the muxer?

Exception:

java.lang.IllegalStateException: Failed to stop the muxer

My code:

private void cloneMediaUsingMuxer( String dstMediaPath) throws IOException {
    // Set up MediaExtractor to read from the source.
    MediaExtractor soundExtractor = new MediaExtractor();
    soundExtractor.setDataSource(audioFilePath);
    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afd2 = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afd2.getFileDescriptor() , afd2.getStartOffset(),afd2.getLength());


    //PATH
    //extractor.setDataSource();
    int trackCount = soundExtractor.getTrackCount();
    int trackCount2 = soundExtractor.getTrackCount();

    //assertEquals("wrong number of tracks", expectedTrackCount, trackCount);
    // Set up MediaMuxer for the destination.
    MediaMuxer muxer;
    muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // Set up the tracks.
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        soundExtractor.selectTrack(i);
        MediaFormat SoundFormat = soundExtractor.getTrackFormat(i);
        int dstIndex = muxer.addTrack(SoundFormat);
        indexMap.put(i, dstIndex);
    }

    HashMap<Integer, Integer> indexMap2 = new HashMap<Integer, Integer>(trackCount2);
    for (int i = 0; i < trackCount2; i++) {
        videoExtractor.selectTrack(i);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(i);
        int dstIndex2 = muxer.addTrack(videoFormat);
        indexMap.put(i, dstIndex2);
    }


    // Copy the samples from MediaExtractor to MediaMuxer.
    boolean sawEOS = false;
    int bufferSize = MAX_SAMPLE_SIZE;
    int frameCount = 0;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo bufferInfo2 = new MediaCodec.BufferInfo();

    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = soundExtractor.readSampleData(dstBuf, offset);
        bufferInfo2.offset = offset;
        bufferInfo2.size = videoExtractor.readSampleData(dstBuf, offset);

        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }else if(bufferInfo2.size < 0){
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }
        else {
            bufferInfo.presentationTimeUs = soundExtractor.getSampleTime();
            bufferInfo2.presentationTimeUs = videoExtractor.getSampleTime();
            //bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = soundExtractor.getSampleTrackIndex();
            int trackIndex2 = videoExtractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
                    bufferInfo);

            soundExtractor.advance();
            videoExtractor.advance();
            frameCount++;

        }
    }

    Toast.makeText(getApplicationContext(),"f:"+frameCount,Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();

}

UPDATE 2: Problem solved! Check my answer to my question.

Thanks for your help.

Answer

I had some problems with the tracks of the audio and video files. Those are gone and everything is OK with my code now, so you can use it for merging an audio file and a video file together.

Code:

private void muxing() {

String outputFile = "";

try {

    File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final2.mp4");
    file.createNewFile();
    outputFile = file.getAbsolutePath();

    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afdd = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afdd.getFileDescriptor() ,afdd.getStartOffset(),afdd.getLength());

    MediaExtractor audioExtractor = new MediaExtractor();
    audioExtractor.setDataSource(audioFilePath);

    Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount() );
    Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount() );

    MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    videoExtractor.selectTrack(0);
    MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
    int videoTrack = muxer.addTrack(videoFormat);

    audioExtractor.selectTrack(0);
    MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
    int audioTrack = muxer.addTrack(audioFormat);

    Log.d(TAG, "Video Format " + videoFormat.toString() );
    Log.d(TAG, "Audio Format " + audioFormat.toString() );

    boolean sawEOS = false;
    int frameCount = 0;
    int offset = 100;
    int sampleSize = 256 * 1024;
    ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
    ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
    MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();


    videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

    muxer.start();

    while (!sawEOS)
    {
        videoBufferInfo.offset = offset;
        videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);


        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS = true;
            videoBufferInfo.size = 0;

        }
        else
        {
            videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
            videoBufferInfo.flags = videoExtractor.getSampleFlags();
            muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
            videoExtractor.advance();


            frameCount++;
            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount , Toast.LENGTH_SHORT).show();



    boolean sawEOS2 = false;
    int frameCount2 =0;
    while (!sawEOS2)
    {
        frameCount2++;

        audioBufferInfo.offset = offset;
        audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);

        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS2 = true;
            audioBufferInfo.size = 0;
        }
        else
        {
            audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
            audioBufferInfo.flags = audioExtractor.getSampleFlags();
            muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
            audioExtractor.advance();


            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount2 , Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();


} catch (IOException e) {
    Log.d(TAG, "Mixer Error 1 " + e.getMessage());
} catch (Exception e) {
    Log.d(TAG, "Mixer Error 2 " + e.getMessage());
}

}
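
A couple of practical notes on using this (my additions, not part of the original answer): MediaMuxer only exists from API level 18 (Android 4.3), and because it copies compressed samples as-is, the source tracks must already be in codecs the MP4 container accepts (for example AAC audio and H.264 video). If muxing fails, logging the mime type reported by the extractor, e.g. audioExtractor.getTrackFormat(0).getString(MediaFormat.KEY_MIME), can show why. A hedged call-site sketch:

// Hypothetical call site: only attempt the mux on API 18+, where MediaMuxer is available.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
    muxing();   // the method shown above
} else {
    Log.w(TAG, "MediaMuxer requires API level 18 or higher");
}

Writing the output file to external storage, as the method above does, also requires the WRITE_EXTERNAL_STORAGE permission on older Android versions.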

Thanks to these sample codes: MediaMuxer Sample Codes - really perfect.
