Correct handling of React Hooks for microphone audio


Problem Description


I'm trying to write a React Hook to handle streaming audio to an AudioContext which is analysed with Meyda.

https://meyda.js.org/

I have managed to get the stream working and am able to pull out the data I want. However, I'm having trouble de-initialising the audio.

If someone can offer me some guidance on setting up this hook correctly I'd be most grateful.

I'm currently receiving the following error when I navigate away from a page using these hooks:

Warning: Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application. To fix, cancel all subscriptions and asynchronous tasks in a useEffect cleanup function.

I have attempted to add a cleanup function to the end of my hook, but my attempts often ended with the audio cutting off immediately or any number of other weird bugs.

Microphone Audio Hook with Meyda Analyser

export const useMeydaAnalyser = () => {

    const [running, setRunning] = useState(false);
    const [features, setFeatures] = useState(null);
    const featuresRef = useRef(features);
    const audioContext = useRef(new AudioContext());

    const getMedia = async() => {
        try {
            return await navigator
                .mediaDevices
                .getUserMedia({audio: true, video: false});
        } catch(err) {
            console.log('Error:', err);
        }
    };

    useEffect(
        () => {
            const audio = audioContext.current;
            let unmounted = false;
            if(!running) {
                getMedia().then(stream => {
                    if (unmounted) return;
                    setRunning(true);
                    const source = audio.createMediaStreamSource(stream);
                    const analyser = Meyda.createMeydaAnalyzer({
                        audioContext: audio,
                        source: source,
                        bufferSize: 1024,
                        featureExtractors: [
                            'amplitudeSpectrum',
                            'mfcc',
                            'rms',
                        ],
                        callback: nextFeatures => {
                            if(!isEqual(featuresRef.current, nextFeatures)) {
                                setFeatures(nextFeatures);
                            }
                        },
                    });
                    analyser.start();
                });
            }
            return () => {
                unmounted = true;
            }
        },
        [running, audioContext],
    );

    useEffect(
        () => {
            featuresRef.current = features;
        },
        [features],
    );

    return features;
};

Audio View

import React, {useEffect} from 'react';
import { useMeydaAnalyser } from '../hooks/use-meyda-audio';

const AudioViewDemo = () => {
    const audioContext = new AudioContext();
    const features = useMeydaAnalyser(audioContext);

    useEffect(() => {
        // Todo: Handle Audio features
        console.log(features);
        // setAudioData(features);
    }, [features]);

    return (
        <div>
            RMS: {features && features.rms}
        </div>
    );
};

export default AudioViewDemo;

Solution

The error is most likely caused by the AudioContext never being closed. You need to close it in the effect's cleanup function.

Note that before using the AudioContext you should check whether its state is 'closed': getMedia is asynchronous, so if the component unmounts shortly after mounting, the AudioContext may already have been closed by the time the stream resolves.
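The race this guard protects against can be sketched without the browser APIs. The names below (mountEffect, analyserStarted, the fake stream) are illustrative stand-ins, not part of Meyda or the Web Audio API:

```javascript
// Minimal sketch of the unmount-vs-async-resolve race, with hypothetical
// stand-ins for the browser APIs so it runs anywhere.
function mountEffect() {
  // Stands in for `new AudioContext()`; only the `state` field matters here.
  const audioContext = { state: 'running' };
  const result = { analyserStarted: false };

  // Stands in for the async getUserMedia call: it settles on a later tick,
  // possibly after the component has already unmounted.
  Promise.resolve('fake-stream').then(() => {
    if (audioContext.state === 'closed') return; // the guard from the answer
    result.analyserStarted = true;               // where analyser.start() would run
  });

  // Stands in for the effect's cleanup function.
  const cleanup = () => { audioContext.state = 'closed'; };
  return { cleanup, result };
}
```

If cleanup runs before the promise settles, the guard stops the analyser from ever starting against a dead context; otherwise setup proceeds normally.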

const getMedia = async () => {
  try {
    return await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: false,
    })
  } catch (err) {
    console.log('Error:', err)
  }
}

const useMeydaAnalyser = () => {
  const [analyser, setAnalyser] = useState(null)
  const [running, setRunning] = useState(false)
  const [features, setFeatures] = useState(null)

  useEffect(() => {
    const audioContext = new AudioContext()

    let newAnalyser
    getMedia().then(stream => {
      if (audioContext.state === 'closed') {
        return
      }
      const source = audioContext.createMediaStreamSource(stream)
      newAnalyser = Meyda.createMeydaAnalyzer({
        audioContext: audioContext,
        source: source,
        bufferSize: 1024,
        featureExtractors: ['amplitudeSpectrum', 'mfcc', 'rms'],
        callback: features => {
          console.log(features)
          setFeatures(features)
        },
      })
      setAnalyser(newAnalyser)
    })
    return () => {
      if (newAnalyser) {
        newAnalyser.stop()
      }
      if (audioContext) {
        audioContext.close()
      }
    }
  }, [])

  useEffect(() => {
    if (analyser) {
      if (running) {
        analyser.start()
      } else {
        analyser.stop()
      }
    }
  }, [running, analyser])

  return [running, setRunning, features]
}
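Since the revised hook now returns [running, setRunning, features] and creates its own AudioContext internally, a consumer no longer passes a context in. A minimal sketch of a consumer component (the component name and markup are illustrative):

```javascript
import React from 'react';
import { useMeydaAnalyser } from '../hooks/use-meyda-audio';

const AudioViewDemo = () => {
  // The hook owns its AudioContext, so nothing is passed in.
  const [running, setRunning, features] = useMeydaAnalyser();

  return (
    <div>
      <button onClick={() => setRunning(!running)}>
        {running ? 'Stop' : 'Start'} analysing
      </button>
      <div>RMS: {features && features.rms}</div>
    </div>
  );
};

export default AudioViewDemo;
```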
