Phonegap mixing audio files


Question

I'm building a karaoke app using PhoneGap for iOS.

I have audio files in the www/assets folder that I am able to play using the media.play() function.

This allows the user to listen to the backing track. While the media is playing, another Media instance is recording.

Once the recording has finished, I need to lay the voice recording file over the backing track, and I have no idea how I might go about doing this.

One approach I thought might work is to use the Web Audio API. I have the following code, which I took from HTML5 Rocks, that loads the two files into an AudioContext and allows me to play both simultaneously. However, what I would like to do is write the two buffers into a single .wav file. Is there any way I can combine source1 and source2 into a single new file?

var context;
var bufferLoader;

function init() {
    // Fix up prefixing
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();

    bufferLoader = new BufferLoader(
        context,
        [
            'backingTrack.wav',
            'voice.wav',
        ],
        finishedLoading
    );

    bufferLoader.load();
}

function finishedLoading(bufferList) {
    // Create two sources and play them both together.
    var source1 = context.createBufferSource();
    var source2 = context.createBufferSource();
    source1.buffer = bufferList[0];
    source2.buffer = bufferList[1];

    source1.connect(context.destination);
    source2.connect(context.destination);
    source1.start(0);
    source2.start(0);
}


function BufferLoader(context, urlList, callback) {
    this.context = context;
    this.urlList = urlList;
    this.onload = callback;
    this.bufferList = [];
    this.loadCount = 0;
}

BufferLoader.prototype.loadBuffer = function(url, index) {
    // Load buffer asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";

    var loader = this;

    request.onload = function() {
        // Asynchronously decode the audio file data in request.response
        loader.context.decodeAudioData(
            request.response,
            function(buffer) {
                if (!buffer) {
                    alert('error decoding file data: ' + url);
                    return;
                }
                loader.bufferList[index] = buffer;
                if (++loader.loadCount == loader.urlList.length)
                    loader.onload(loader.bufferList);
            },
            function(error) {
                console.error('decodeAudioData error', error);
            }
        );
    }

    request.onerror = function() {
        alert('BufferLoader: XHR error');
    }

    request.send();
}

BufferLoader.prototype.load = function() {
    for (var i = 0; i < this.urlList.length; ++i)
        this.loadBuffer(this.urlList[i], i);
}
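If the goal is just to merge the two decoded tracks into one set of samples without playing them through the audio graph, one option is to sum the samples directly. This is a minimal sketch, assuming both tracks are mono Float32Arrays at the same sample rate (calling getChannelData(0) on each decoded buffer would supply them); mixDown is an illustrative name, not part of any API:

```javascript
// Mix two mono sample arrays by summing them, with hard clipping.
// Assumes both tracks share the same sample rate.
function mixDown(trackA, trackB) {
    var length = Math.max(trackA.length, trackB.length);
    var mixed = new Float32Array(length);
    for (var i = 0; i < length; i++) {
        var a = i < trackA.length ? trackA[i] : 0;
        var b = i < trackB.length ? trackB[i] : 0;
        // Sum and clamp to the [-1, 1] range used for float PCM samples.
        mixed[i] = Math.max(-1, Math.min(1, a + b));
    }
    return mixed;
}
```

Plain summation can clip where both tracks are loud at the same time; scaling each input (e.g. multiplying by 0.5) before summing is a common way to avoid that.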

There might be something in this solution: How do I convert an array of audio data into a wav file? (http://stackoverflow.com/questions/18488264/how-do-i-convert-an-array-of-audio-data-into-a-wav-file). As far as I can make out, they are interleaving the two buffers and encoding them as a .wav, but I can't figure out where they are writing them to a file (saving the new wav file). Any ideas?
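The encoding step from that linked answer boils down to writing a 44-byte RIFF/WAVE header followed by the samples as 16-bit signed PCM. A minimal sketch for mono audio, assuming the mixed samples are already in a single Float32Array (the function name is illustrative):

```javascript
// Encode Float32Array samples as a 16-bit PCM mono WAV file.
function encodeWav(samples, sampleRate) {
    var buffer = new ArrayBuffer(44 + samples.length * 2);
    var view = new DataView(buffer);

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++) {
            view.setUint8(offset + i, str.charCodeAt(i));
        }
    }

    writeString(0, 'RIFF');                           // RIFF chunk ID
    view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
    writeString(8, 'WAVE');                           // format
    writeString(12, 'fmt ');                          // fmt subchunk ID
    view.setUint32(16, 16, true);                     // fmt subchunk size
    view.setUint16(20, 1, true);                      // audio format: PCM
    view.setUint16(22, 1, true);                      // channels: mono
    view.setUint32(24, sampleRate, true);             // sample rate
    view.setUint32(28, sampleRate * 2, true);         // byte rate
    view.setUint16(32, 2, true);                      // block align
    view.setUint16(34, 16, true);                     // bits per sample
    writeString(36, 'data');                          // data subchunk ID
    view.setUint32(40, samples.length * 2, true);     // data subchunk size

    // Convert float samples in [-1, 1] to signed 16-bit integers.
    for (var i = 0; i < samples.length; i++) {
        var s = Math.max(-1, Math.min(1, samples[i]));
        view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
    return view;
}
```

The resulting bytes can then be wrapped in a Blob (`new Blob([view], {type: 'audio/wav'})`) and uploaded with XMLHttpRequest or FormData, which answers the "where do they write it to a file" part: in the browser the output stays an in-memory Blob until it is posted or downloaded.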

The answer below doesn't really help, as I'm using the Web Audio API (JavaScript), not iOS.

Answer

The solution was to use an OfflineAudioContext.

The steps were:

1. Load the two files as buffers using the BufferLoader.
2. Create an OfflineAudioContext.
3. Connect the two buffers to the OfflineAudioContext.
4. Start the two buffers.
5. Use the offline startRendering function.
6. Set the offline.oncomplete function to get a handle on the renderedBuffer.

Here's the code:

// Offline context: 2 channels at 44.1 kHz, long enough for the voice buffer
offline = new webkitOfflineAudioContext(2, voice.buffer.length, 44100);

vocalSource = offline.createBufferSource();
vocalSource.buffer = bufferList[0];
vocalSource.connect(offline.destination);

backing = offline.createBufferSource();
backing.buffer = bufferList[1];
backing.connect(offline.destination);

// Schedule both sources at time 0 so they render together
vocalSource.start(0);
backing.start(0);

// oncomplete fires when rendering is done; ev.renderedBuffer holds the mix
offline.oncomplete = function(ev) {
    alert(bufferList);
    playBackMix(ev);
    console.log(ev.renderedBuffer);
    sendWaveToPost(ev);
};
offline.startRendering();
