WebRTC and Asp.NetCore


Problem description

I would like to record the audio stream from my Angular web app to my Asp.net Core API.

I think using SignalR and its websockets is a good way to do that.

With this TypeScript code, I'm able to get a MediaStream:

import { HubConnection } from '@aspnet/signalr';

[...]

private stream: MediaStream;
private connection: webkitRTCPeerConnection;
private _hubConnection: HubConnection;
@ViewChild('video') video;

[...]

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    console.trace('Received local stream');
    this.video.srcObject = stream;
    this.stream = stream;

    // Open the SignalR connection and try to push the MediaStream to the hub.
    this._hubConnection = new HubConnection('[MY_API_URL]/webrtc');
    this._hubConnection.start()
      .then(() => this._hubConnection.send("SendStream", stream));
  })
  .catch(function (e) {
    console.error('getUserMedia() error: ' + e.message);
  });

On the API side, I use:

public class MyHub : Hub
{
    public void SendStream(object o)
    {
    }
}

But when I cast o to System.IO.Stream, I get null.

When I read the WebRTC documentation, I saw information about RTCPeerConnection, IceConnection... Do I need that?

How can I stream the audio from a WebClient to an Asp.netCore API using SignalR? Documentation? GitHub?

Thanks for your help.

Answer

I found the way to get access to the microphone stream and transmit it to the server, here is the code:

  private audioCtx: AudioContext;
  private stream: MediaStream;

  // Convert the Float32 samples (range -1..1) from the Web Audio API into
  // 16-bit signed PCM, which is what gets sent to the server.
  convertFloat32ToInt16(buffer: Float32Array) {
    let l = buffer.length;
    let buf = new Int16Array(l);
    while (l--) {
      // Clamp to [-1, 1] before scaling to the Int16 range.
      buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
    }
    return buf.buffer;
  }

  startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true })
      .then(stream => {
        this.audioCtx = new AudioContext();
        this.audioCtx.onstatechange = (state) => { console.log(state); }

        // A ScriptProcessorNode delivers the raw samples in fixed-size chunks.
        var scriptNode = this.audioCtx.createScriptProcessor(4096, 1, 1);
        scriptNode.onaudioprocess = (audioProcessingEvent) => {
          // The input buffer holds the microphone samples for this chunk.
          var inputBuffer = audioProcessingEvent.inputBuffer;
          // Loop through the input channels (in this case there is only one).
          for (var channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
            var chunk = inputBuffer.getChannelData(channel);
            // Convert to Int16 before sending, because endianness and sample format matter.
            this.MySignalRService.send("SendStream", this.convertFloat32ToInt16(chunk));
          }
        }

        // Wire microphone -> script processor -> destination so onaudioprocess fires.
        var source = this.audioCtx.createMediaStreamSource(stream);
        source.connect(scriptNode);
        scriptNode.connect(this.audioCtx.destination);

        this.stream = stream;
      })
      .catch(function (e) {
        console.error('getUserMedia() error: ' + e.message);
      });
  }

  stopRecording() {
    try {
      let stream = this.stream;
      // Stop all tracks and close the audio context to release the microphone.
      stream.getAudioTracks().forEach(track => track.stop());
      stream.getVideoTracks().forEach(track => track.stop());
      this.audioCtx.close();
    }
    catch (error) {
      console.error('stopRecording() error: ' + error);
    }
  }
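
The answer above only shows the sending side. For reference, here is a minimal server-side sketch (not part of the original answer) of a hub that could receive those chunks: it assumes each chunk arrives as a byte[] on the server (for example when the MessagePack hub protocol is used), and the static per-connection dictionary is purely illustrative.

using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.SignalR;

public class MyHub : Hub
{
    // One PCM buffer per connection; a real implementation would need locking
    // and cleanup when the connection closes (illustrative sketch only).
    private static readonly Dictionary<string, MemoryStream> Buffers =
        new Dictionary<string, MemoryStream>();

    public void SendStream(byte[] chunk)
    {
        if (!Buffers.TryGetValue(Context.ConnectionId, out var buffer))
        {
            buffer = new MemoryStream();
            Buffers[Context.ConnectionId] = buffer;
        }

        // Append the raw 16-bit PCM samples sent by the client.
        buffer.Write(chunk, 0, chunk.Length);
    }
}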

Next step will be to convert my Int16Array to a wav file.
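
As a rough illustration of that next step, here is a hedged sketch (again, not from the original answer) of wrapping the accumulated 16-bit PCM bytes in a minimal WAV header on the server. It assumes mono audio and that sampleRate matches the AudioContext's sample rate used on the client (typically 44100 or 48000); the WavWriter name is made up for this example.

using System.IO;
using System.Text;

public static class WavWriter
{
    // Writes a 44-byte PCM WAV header followed by the raw samples.
    public static void Write(Stream output, byte[] pcm, int sampleRate,
                             short channels = 1, short bitsPerSample = 16)
    {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        short blockAlign = (short)(channels * bitsPerSample / 8);

        using (var writer = new BinaryWriter(output, Encoding.ASCII, leaveOpen: true))
        {
            writer.Write(Encoding.ASCII.GetBytes("RIFF"));
            writer.Write(36 + pcm.Length);               // file size minus the first 8 bytes
            writer.Write(Encoding.ASCII.GetBytes("WAVE"));
            writer.Write(Encoding.ASCII.GetBytes("fmt "));
            writer.Write(16);                            // fmt chunk size for PCM
            writer.Write((short)1);                      // audio format: uncompressed PCM
            writer.Write(channels);
            writer.Write(sampleRate);
            writer.Write(byteRate);
            writer.Write(blockAlign);
            writer.Write(bitsPerSample);
            writer.Write(Encoding.ASCII.GetBytes("data"));
            writer.Write(pcm.Length);
            writer.Write(pcm);
        }
    }
}

Since the data is already uncompressed PCM, writing the header and then the raw samples is enough to get a playable .wav file; no re-encoding is needed.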

Sources that helped me:

Note: I didn't add the code on how to configure SignalR, as that was not the purpose here.
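
For completeness, a minimal sketch of the wiring that note refers to, assuming ASP.NET Core 2.x SignalR (the server-side counterpart of the @aspnet/signalr client used above); MyHub and the /webrtc path come from the question.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Expose the hub on the same path the client connects to.
        app.UseSignalR(routes =>
        {
            routes.MapHub<MyHub>("/webrtc");
        });
    }
}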
