android - How to mux audio file and video file?


Problem description

I have a 3gp file recorded from the microphone and an mp4 video file. I want to mux the audio file and the video file into a single mp4 file and save it. I have searched a lot but found nothing helpful on using Android's MediaMuxer API: MediaMuxer api

UPDATE: This is my method that muxes the two files, and it throws an exception. The reason is that the destination mp4 file doesn't have any tracks. Can someone help me add the audio and video tracks to the muxer?

Exception:

java.lang.IllegalStateException: Failed to stop the muxer

My code:

private void cloneMediaUsingMuxer( String dstMediaPath) throws IOException {
    // Set up MediaExtractor to read from the source.
    MediaExtractor soundExtractor = new MediaExtractor();
    soundExtractor.setDataSource(audioFilePath);
    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afd2 = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afd2.getFileDescriptor() , afd2.getStartOffset(),afd2.getLength());


    //PATH
    //extractor.setDataSource();
    int trackCount = soundExtractor.getTrackCount();
    int trackCount2 = soundExtractor.getTrackCount();

    //assertEquals("wrong number of tracks", expectedTrackCount, trackCount);
    // Set up MediaMuxer for the destination.
    MediaMuxer muxer;
    muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // Set up the tracks.
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        soundExtractor.selectTrack(i);
        MediaFormat SoundFormat = soundExtractor.getTrackFormat(i);
        int dstIndex = muxer.addTrack(SoundFormat);
        indexMap.put(i, dstIndex);
    }

    HashMap<Integer, Integer> indexMap2 = new HashMap<Integer, Integer>(trackCount2);
    for (int i = 0; i < trackCount2; i++) {
        videoExtractor.selectTrack(i);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(i);
        int dstIndex2 = muxer.addTrack(videoFormat);
        indexMap.put(i, dstIndex2);
    }


    // Copy the samples from MediaExtractor to MediaMuxer.
    boolean sawEOS = false;
    int bufferSize = MAX_SAMPLE_SIZE;
    int frameCount = 0;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo bufferInfo2 = new MediaCodec.BufferInfo();

    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = soundExtractor.readSampleData(dstBuf, offset);
        bufferInfo2.offset = offset;
        bufferInfo2.size = videoExtractor.readSampleData(dstBuf, offset);

        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }else if(bufferInfo2.size < 0){
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        }
        else {
            bufferInfo.presentationTimeUs = soundExtractor.getSampleTime();
            bufferInfo2.presentationTimeUs = videoExtractor.getSampleTime();
            //bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = soundExtractor.getSampleTrackIndex();
            int trackIndex2 = videoExtractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
                    bufferInfo);

            soundExtractor.advance();
            videoExtractor.advance();
            frameCount++;

        }
    }

    Toast.makeText(getApplicationContext(),"f:"+frameCount,Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();

}
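
The IllegalStateException above is consistent with one of the destination tracks never receiving any samples: the second loop stores the video destination indices in indexMap (overwriting the audio entries) instead of indexMap2, and writeSampleData is only ever called with the sound extractor's sample, so the other destination track is still empty when stop() runs. A minimal sketch of a corrected track setup, reusing the names from the code above, might look like this:

// Hypothetical correction sketch: take trackCount2 from the video extractor and
// keep one index map per extractor so both destination tracks receive samples.
int trackCount = soundExtractor.getTrackCount();
int trackCount2 = videoExtractor.getTrackCount();
HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
for (int i = 0; i < trackCount; i++) {
    soundExtractor.selectTrack(i);
    indexMap.put(i, muxer.addTrack(soundExtractor.getTrackFormat(i)));
}
HashMap<Integer, Integer> indexMap2 = new HashMap<Integer, Integer>(trackCount2);
for (int i = 0; i < trackCount2; i++) {
    videoExtractor.selectTrack(i);
    indexMap2.put(i, muxer.addTrack(videoExtractor.getTrackFormat(i)));
}

Each extractor then also needs its own ByteBuffer and its own writeSampleData call against its own index map, which is essentially what the accepted answer below does.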

UPDATE 2: Problem solved! See my answer to the question below.

Thanks for your help.

Answer

I had some problems with the tracks of the audio and video files. Those are gone now and everything in my code works, so you can use it to merge an audio file and a video file together.

Code:

private void muxing() {

String outputFile = "";

try {

    File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final2.mp4");
    file.createNewFile();
    outputFile = file.getAbsolutePath();

    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afdd = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afdd.getFileDescriptor() ,afdd.getStartOffset(),afdd.getLength());

    MediaExtractor audioExtractor = new MediaExtractor();
    audioExtractor.setDataSource(audioFilePath);

    Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount() );
    Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount() );

    MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    videoExtractor.selectTrack(0);
    MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
    int videoTrack = muxer.addTrack(videoFormat);

    audioExtractor.selectTrack(0);
    MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
    int audioTrack = muxer.addTrack(audioFormat);

    Log.d(TAG, "Video Format " + videoFormat.toString() );
    Log.d(TAG, "Audio Format " + audioFormat.toString() );

    boolean sawEOS = false;
    int frameCount = 0;
    int offset = 100;
    int sampleSize = 256 * 1024;
    ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
    ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
    MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();


    videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

    muxer.start();

    while (!sawEOS)
    {
        videoBufferInfo.offset = offset;
        videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);


        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS = true;
            videoBufferInfo.size = 0;

        }
        else
        {
            videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
            videoBufferInfo.flags = videoExtractor.getSampleFlags();
            muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
            videoExtractor.advance();


            frameCount++;
            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount , Toast.LENGTH_SHORT).show();



    boolean sawEOS2 = false;
    int frameCount2 =0;
    while (!sawEOS2)
    {
        frameCount2++;

        audioBufferInfo.offset = offset;
        audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);

        if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0)
        {
            Log.d(TAG, "saw input EOS.");
            sawEOS2 = true;
            audioBufferInfo.size = 0;
        }
        else
        {
            audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
            audioBufferInfo.flags = audioExtractor.getSampleFlags();
            muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
            audioExtractor.advance();


            Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs +" Flags:" + videoBufferInfo.flags +" Size(KB) " + videoBufferInfo.size / 1024);
            Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs +" Flags:" + audioBufferInfo.flags +" Size(KB) " + audioBufferInfo.size / 1024);

        }
    }

    Toast.makeText(getApplicationContext() , "frame:" + frameCount2 , Toast.LENGTH_SHORT).show();

    muxer.stop();
    muxer.release();
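
    // Suggested cleanup (not in the original answer): release the extractors and
    // close the asset descriptor so their native resources are freed promptly.
    videoExtractor.release();
    audioExtractor.release();
    afdd.close();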


} catch (IOException e) {
    Log.d(TAG, "Mixer Error 1 " + e.getMessage());
} catch (Exception e) {
    Log.d(TAG, "Mixer Error 2 " + e.getMessage());
}

}
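
Since the method writes final2.mp4 to Environment.getExternalStorageDirectory(), the manifest also needs the WRITE_EXTERNAL_STORAGE permission (with a runtime grant on Android 6.0 and later). Below is a hypothetical way to trigger it from the Activity; btn_mux is an assumed button id, and because muxing() shows Toasts it is kept on the UI thread here. For long files the copy loops are better moved to a worker thread, with the Toasts posted back via runOnUiThread().

// Hypothetical trigger: btn_mux is an assumed id, not part of the original code.
// muxing() uses audioFilePath and the Produce.MP4 asset set up above and writes
// final2.mp4 to external storage.
findViewById(R.id.btn_mux).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        muxing();
    }
});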

Thanks to this sample code: MediaMuxer sample code, which works perfectly.
