How to play audio stream chunks recorded with WebRTC?
Question
I'm trying to create an experimental application that streams audio in real time from client 1 to client 2.
So, following some tutorials and questions about the same subject, I used WebRTC and binaryjs. So far, this is what I have:
1- Client 1 and Client 2 have connected to BinaryJS to send/receive data chunks.
2- Client 1 used WebRTC to record audio and gradually send it to BinaryJS.
3- Client 2 receives the chunks and tries to play them.
Well, I'm getting an error in the last part. This is the error message I get:
Uncaught RangeError: Source is too large
    at Float32Array.set (native)
Here is the code:
Client 1
var WSClient;
var AudioStream;

function load() {
    var session = {
        audio: true,
        video: false
    };
    // Note: navigator.getUserMedia is deprecated; modern browsers use
    // navigator.mediaDevices.getUserMedia instead.
    navigator.getUserMedia(session, startRecording, onError);
    WSClient = new BinaryClient('ws://localhost:9001');
    WSClient.on('open', function() {
        console.log('client opened');
        AudioStream = WSClient.createStream();
    });
}

function startRecording(stream) {
    var context = new AudioContext();
    var audio_input = context.createMediaStreamSource(stream);
    var buffer_size = 2048;
    var recorder = context.createScriptProcessor(buffer_size, 1, 1);
    recorder.onaudioprocess = function(e) {
        console.log('chunk');
        // getChannelData returns a Float32Array of samples in [-1, 1].
        var left = e.inputBuffer.getChannelData(0);
        // Guard: the BinaryJS stream may not exist yet if recording
        // starts before the socket's 'open' event fires.
        if (AudioStream) AudioStream.write(left);
    };
    audio_input.connect(recorder);
    recorder.connect(context.destination);
}

function onError(err) {
    // getUserMedia failure callback (was referenced but not defined).
    console.error(err);
}
Client 2
var WSClient;
var audioContext;
var sourceNode;

function load() {
    audioContext = new AudioContext();
    sourceNode = audioContext.createBufferSource();
    sourceNode.connect(audioContext.destination);
    sourceNode.start(0);

    WSClient = new BinaryClient('ws://localhost:9001');
    WSClient.on('open', function() {
        console.log('client opened');
    });
    WSClient.on('stream', function(stream, meta) {
        // Collect stream data as it arrives.
        stream.on('data', function(data) {
            console.log('received chunk');
            var integers = new Int16Array(data);
            var audioBuffer = audioContext.createBuffer(1, 2048, 4410);
            audioBuffer.getChannelData(0).set(integers); // apparently this is where the error occurs
            sourceNode.buffer = audioBuffer;
        });
    });
}
Server
var wav = require('wav');
var binaryjs = require('binaryjs');
var binaryjs_server = binaryjs.BinaryServer;

var server = binaryjs_server({ port: 9001 });

server.on('connection', function(client) {
    console.log('server connected');
    var file_writter = null;
    client.on('stream', function(stream, meta) {
        console.log('streaming', server.clients);
        // Relay the incoming stream to every other connected client.
        for (var id in server.clients) {
            if (server.clients.hasOwnProperty(id)) {
                var otherClient = server.clients[id];
                if (otherClient != client) {
                    var send = otherClient.createStream(meta);
                    stream.pipe(send);
                }
            }
        }
    });
    client.on('close', function(stream) {
        console.log('client closed');
        if (file_writter != null) file_writter.end();
    });
});
The error occurs here:
audioBuffer.getChannelData(0).set(integers);
So I have two questions:
Is it possible to send the chunks I captured in client 1 and then reproduce them in client 2?
What is the deal with the error I'm having?
Thanks everybody!
@edit 1
Since I'm getting code snippets from other questions, I'm still trying to understand it. I commented out the line in client 2's code that creates an Int16Array, and I now get a different error (but I don't know which version of the code is more correct):
Uncaught DOMException: Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer after it has been already been set
Probably because I'm setting it every time I get a new chunk of data.
Answer
The DOMException about AudioBufferSourceNode means you need to create a new AudioBufferSourceNode for every new AudioBuffer that you're creating. So something like:
sourceNode = new AudioBufferSourceNode(audioContext, { buffer: audioBuffer });
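A minimal sketch of how this could sit in client 2's 'data' handler, assuming audioContext is the context created in load(). An AudioBufferSourceNode is one-shot (it can only be started once), so each chunk gets its own node:

// Create, connect, and start a fresh node per incoming chunk.
var node = new AudioBufferSourceNode(audioContext, { buffer: audioBuffer });
node.connect(audioContext.destination); // route this chunk to the speakers
node.start(0); // one-shot playback of this chunk's buffer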
And an AudioBuffer has Float32Arrays. You need to convert your Int16Array to a Float32Array before assigning it to an AudioBuffer. Probably good enough to divide everything by 32768.
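A sketch of that conversion, assuming the chunks really are 16-bit PCM as the Int16Array in the question implies (signed 16-bit samples span -32768..32767, so dividing by 32768 maps them into the [-1, 1) range Web Audio expects):

var integers = new Int16Array(data);
var floats = new Float32Array(integers.length);
for (var i = 0; i < integers.length; i++) {
    floats[i] = integers[i] / 32768; // scale signed 16-bit PCM to [-1, 1)
}
var audioBuffer = audioContext.createBuffer(1, floats.length, audioContext.sampleRate);
audioBuffer.getChannelData(0).set(floats);

Worth noting: client 1 writes the Float32Array from getChannelData directly, so the bytes arriving at client 2 may already be 32-bit floats. Viewing a 2048-sample float chunk as an Int16Array yields 4096 elements, which would also explain the original "Source is too large" RangeError; if so, reading the data with new Float32Array(data) would skip the conversion entirely.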