Record mic and audio from SIP call using sip.js


Problem description

Good evening Stack Overflow! I really need help with a project of mine where I'm using sip.js and a VoIP provider to make real calls to a phone number.

The goal

I want to allow the user to record both the audio and the microphone and save the data on a server (base64-encoded or as a file), so that after the conversation I can listen to it again and use it for my purposes (employee training).

The problem

I can't capture the sound of the person speaking, which comes through an <audio> HTML tag (working with the sip.js plugin). As of now I haven't found any way to successfully save the sound streaming through this audio tag.

What I've done so far

I've successfully figured out how to record the microphone using a plugin called AudioRecorder, which lets me record the audio through the microphone and save it. I changed the code slightly so the recording is saved base64-encoded. This all works as expected, though I only get my own voice, not the person I'm talking with.
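The base64 step mentioned above can be done once the recorder hands back raw bytes; a minimal sketch of such a conversion (the helper name and chunking are my own, not part of AudioRecorder):

```javascript
// Convert recorded audio bytes (a Uint8Array) to a base64 string
// suitable for POSTing to a server. Chunked to avoid the call-stack
// limit that String.fromCharCode(...wholeArray) would hit on long
// recordings.
function bytesToBase64(bytes) {
  var binary = "";
  var chunkSize = 0x8000; // 32 K characters per fromCharCode call
  for (var i = 0; i < bytes.length; i += chunkSize) {
    binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunkSize));
  }
  return btoa(binary);
}

// Example: the bytes of the ASCII string "Hi"
console.log(bytesToBase64(new Uint8Array([72, 105]))); // "SGk="
```

In the browser the same result can be obtained from the exported blob via FileReader.readAsDataURL; the manual version above is shown because it makes the chunking explicit.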

Because I succeeded in recording my own voice, I looked into the AudioRecorder plugin and tried to adapt it to record from an audio tag instead. I found the createMediaStreamSource function inside AudioRecorder and wanted to use it with the <audio> tag, which did not work (as I suspected, because the <audio> tag by itself isn't a stream, as far as I understand).

The code

I'm basically using the sip.js plugin to establish a call to a phone number using the code below (just an example matching my code, because my real code contains some added values that don't need to be shown here):

// Create a user agent called bob, connect, and register to receive invitations.
var userAgent = new SIP.UA({
  uri: 'bob@example.com',
  wsServers: ['wss://sip-ws.example.com'],
  register: true
});

var options = {
  media: {
    constraints: { audio: true, video: false },
    render: { remote: document.getElementById("audio") }
  }
};

Then I use the built-in invite function to call a phone number, which does the rest. Audio and microphone are now up and running.

userAgent.invite("+4512345678", options);

I can now talk with my new best friend Bob, but I can't record anything other than my own voice.

What's next?

I would really like some help understanding how I can record Bob's voice and store it, preferably in the same file as my own voice. If I have to record two separate files and play them back in sync, I won't mind, but a single file is preferred.

I know this might seem like a call for help without showing any real code of what I've tried myself, but I have to admit I've fiddled with the code for hours without any good results, and now I'm asking for help.

Thank you all in advance, and sorry for the bad grammar and (mis)use of language.

Answer

Okay, so I finally found a solution to my problem, which I want to share here.

What I did to solve the problem was to add ONE simple line of code to the "normal" microphone recording script. The script to record mic audio is:

window.AudioContext = window.AudioContext || window.webkitAudioContext;

var audioGlobalContext = new AudioContext();
var audioOutputAnalyser;
var inputPoint = null,
    audioRecorder = null;
var recording = false;

// Controls the start and stop of recording
function toggleRecording( e ) {
    if (recording == true) {
        recording = false;
        audioRecorder.stop();
        audioRecorder.getBuffers( gotBuffers );
        console.log("Stop recording");
    } else {
        if (!audioRecorder)
            return;
        recording = true;
        audioRecorder.clear();
        audioRecorder.record();
        console.log("Start recording");
    }
}

function gotBuffers(buffers) {
    audioRecorder.exportWAV(doneEncoding);
}

function doneEncoding(blob) {
    document.getElementById("outputAudio").pause();
    Recorder.setupDownload(blob);
}

function gotAudioMicrophoneStream(stream) {
    var source = audioGlobalContext.createMediaStreamSource(stream);
    source.connect(inputPoint);
}

function initAudio() {
    if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    if (!navigator.cancelAnimationFrame)
        navigator.cancelAnimationFrame = navigator.webkitCancelAnimationFrame || navigator.mozCancelAnimationFrame;
    if (!navigator.requestAnimationFrame)
        navigator.requestAnimationFrame = navigator.webkitRequestAnimationFrame || navigator.mozRequestAnimationFrame;

    inputPoint = audioGlobalContext.createGain();

    navigator.getUserMedia({
        "audio": {
            "mandatory": {
                "googEchoCancellation": "true",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "true",
                "googHighpassFilter": "false"
            },
            "optional": []
        },
    }, gotAudioMicrophoneStream, function(e) {
        alert('Error recording microphone');
        console.log(e);
    });

    var analyserNode = audioGlobalContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect(analyserNode);
    var zeroGain = audioGlobalContext.createGain();
    zeroGain.gain.value = 0.0;
    inputPoint.connect(zeroGain);
    zeroGain.connect(audioGlobalContext.destination);

    audioRecorder = new Recorder(inputPoint);
}

window.addEventListener('load', initAudio );
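For reference, the exportWAV step above (handled by Recorder.js) wraps the captured PCM samples in a RIFF/WAVE container; a simplified sketch of what that encoding does for mono 16-bit audio (the function name and layout are my own condensed version, not Recorder.js's actual code):

```javascript
// Build a minimal 16-bit mono WAV file from Float32 PCM samples,
// returning a DataView over the finished buffer. This mirrors what
// Recorder.js's exportWAV does, stripped to the essentials.
function encodeWAV(samples, sampleRate) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);

  function writeString(offset, s) {
    for (var i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  }

  writeString(0, "RIFF");
  view.setUint32(4, 36 + samples.length * 2, true); // file size minus 8
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, "data");
  view.setUint32(40, samples.length * 2, true);

  // Clamp each float sample to [-1, 1] and scale to 16-bit signed.
  for (var i = 0; i < samples.length; i++) {
    var s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
  return view;
}
```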

The function I was looking for to convert the audio tag's sound into an audio source was createMediaElementSource(), so what I did was add this function:

function gotAudioOutputStream() {
    var source = audioGlobalContext.createMediaElementSource(document.getElementById("outputAudio"));
    source.connect(inputPoint);
    source.connect(audioGlobalContext.destination);
}

And in the initAudio() function, just after navigator.getUserMedia, I added a call to that function. The finished code (with HTML) looks like this:

window.AudioContext = window.AudioContext || window.webkitAudioContext;

var audioGlobalContext = new AudioContext();
var audioOutputAnalyser;
var inputPoint = null,
    audioRecorder = null;
var recording = false;

// Controls the start and stop of recording
function toggleRecording( e ) {
    if (recording == true) {
        recording = false;
        audioRecorder.stop();
        audioRecorder.getBuffers( gotBuffers );
        console.log("Stop recording");
    } else {
        if (!audioRecorder)
            return;
        recording = true;
        audioRecorder.clear();
        audioRecorder.record();
        console.log("Start recording");
    }
}

function gotBuffers(buffers) {
    audioRecorder.exportWAV(doneEncoding);
}

function doneEncoding(blob) {
    document.getElementById("outputAudio").pause();
    Recorder.setupDownload(blob);
}

function gotAudioMicrophoneStream(stream) {
    var source = audioGlobalContext.createMediaStreamSource(stream);
    source.connect(inputPoint);
}

function gotAudioOutputStream() {
    var source = audioGlobalContext.createMediaElementSource(document.getElementById("outputAudio"));
    source.connect(inputPoint);
    source.connect(audioGlobalContext.destination);
}

function initAudio() {
    if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    if (!navigator.cancelAnimationFrame)
        navigator.cancelAnimationFrame = navigator.webkitCancelAnimationFrame || navigator.mozCancelAnimationFrame;
    if (!navigator.requestAnimationFrame)
        navigator.requestAnimationFrame = navigator.webkitRequestAnimationFrame || navigator.mozRequestAnimationFrame;

    inputPoint = audioGlobalContext.createGain();

    navigator.getUserMedia({
        "audio": {
            "mandatory": {
                "googEchoCancellation": "true",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "true",
                "googHighpassFilter": "false"
            },
            "optional": []
        },
    }, gotAudioMicrophoneStream, function(e) {
        alert('Error recording microphone');
        console.log(e);
    });

    gotAudioOutputStream();

    var analyserNode = audioGlobalContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect(analyserNode);
    var zeroGain = audioGlobalContext.createGain();
    zeroGain.gain.value = 0.0;
    inputPoint.connect(zeroGain);
    zeroGain.connect(audioGlobalContext.destination);

    audioRecorder = new Recorder(inputPoint);
}

window.addEventListener('load', initAudio );

<!doctype html>
<html>
<head>
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Audio Recorder</title>
    <script src="assets/js/AudioRecorder/js/recorderjs/recorder.js"></script>
    <script src="assets/js/AudioRecorder/js/main.js"></script>
</head>
<body>
    <audio id="outputAudio" autoplay="true" src="test.mp3" type="audio/mpeg"></audio>
    <audio id="playBack"></audio>
    <div id="controls">
        <img id="record" src="assets/js/AudioRecorder/img/mic128.png" onclick="toggleRecording(this);">
    </div>
</body>
</html>

This records your voice and the sound coming from the audio element. Simple. I hope everyone out there who had the same problem as me wrapping their head around the Audio API will find this helpful.
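Conceptually, connecting both sources to the same gain node (inputPoint) makes the Web Audio graph sum the two signals before the recorder sees them. A plain-JS illustration of that summing behaviour (a hypothetical helper for explanation only, not a Web Audio API call):

```javascript
// Sum two equal-length sample arrays the way a gain node does when
// two sources connect to it, clamping the result to [-1, 1] as the
// final output stage would.
function mixSamples(a, b) {
  var out = new Float32Array(a.length);
  for (var i = 0; i < a.length; i++) {
    out[i] = Math.max(-1, Math.min(1, a[i] + b[i]));
  }
  return out;
}

var mic    = new Float32Array([0.2, 0.7, -0.3]); // local voice
var remote = new Float32Array([0.1, 0.6, -0.2]); // Bob, via the audio tag
console.log(mixSamples(mic, remote)); // middle sample clamped to 1
```

This is why both voices end up in a single recorded file: the recorder taps inputPoint, which already carries the mixed signal.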

The code snippets shown above require Recorder.js to work.

