Node.js confusion about buffers, streams, pipe, axios, createWriteStream, and createReadStream


Question

I'm trying to download an audio file (about 250KB) from Firebase Storage and send it to IBM Cloud Speech-to-Text, using Firebase Cloud Functions (Node 8). I'm using axios to send the HTTP GET request to the download URL. axios returns a stream so I use fs.createReadStream(response) to stream the file to IBM Cloud Speech-to-Text. I don't get an error message, rather nothing is sent to IBM Cloud Speech-to-Text.

exports.IBM_Speech_to_Text = functions.firestore.document('Users/{userID}/Pronunciation_Test/downloadURL').onUpdate((change, context) => { // this is the Firebase Cloud Functions trigger

    const fs = require('fs');
    const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
    const { IamAuthenticator } = require('ibm-watson/auth');

    const speechToText = new SpeechToTextV1({
      authenticator: new IamAuthenticator({
        apikey: 'my-api-key',
      }),
      url: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/01010101',
    });

    const axios = require('axios');

    return axios({
      method: 'get',
      url: 'https://firebasestorage.googleapis.com/v0/b/languagetwo-cd94d.appspot.com/o/Users%2FbcmrZDO0X5N6kB38MqhUJZ11OzA3%2Faudio-file.flac?alt=media&token=871b9401-c6af-4c38-aaf3-889bb5952d0e', // the download URL for the audio file
      responseType: 'stream' // is this creating a stream?
    })
    .then(function (response) {
      var params = {
        audio: fs.createReadStream(response),
        contentType: 'audio/flac',
        wordAlternativesThreshold: 0.9,
        keywords: ['colorado', 'tornado', 'tornadoes'],
        keywordsThreshold: 0.5,
      };
      speechToText.recognize(params)
      .then(results => {
        console.log(JSON.stringify(results, null, 2)); // undefined
      })
      .catch(function (error) {
        console.log(error.error);
      });
    })
    .catch(function (error) {
      console.log(error.error);
    });
  });

The problem is that the response from axios isn't going to fs.createReadStream().

The documentation for fs.createReadStream(path) says path <string> | <Buffer> | <URL>. response is none of those. Do I need to write response to a buffer? I tried this:

const responseBuffer = Buffer.from(response.data.pipe(fs.createWriteStream(responseBuffer)));
;
var params = {
   audio: fs.createReadStream(responseBuffer),

but that didn't work either. That first line is smelly...

Or should I use streams?

exports.IBM_Speech_to_Text = functions.firestore.document('Users/{userID}/Pronunciation_Test/downloadURL').onUpdate((change, context) => {
      const fs = require('fs');
      const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
      const { IamAuthenticator } = require('ibm-watson/auth');

      const speechToText = new SpeechToTextV1({
        authenticator: new IamAuthenticator({
          apikey: 'my-api-key',
        }),
        url: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/01010101',
      });

      const axios = require('axios');
      const path = require('path');

      return axios({
        method: 'get',
        url: 'https://firebasestorage.googleapis.com/v0/b/languagetwo-cd94d.appspot.com/o/Users%2FbcmrZDO0X5N6kB38MqhUJZ11OzA3%2Faudio-file.flac?alt=media&token=871b9401-c6af-4c38-aaf3-889bb5952d0e',
        responseType: 'stream'
      })
      .then(function (response) {
          response.data.pipe(createWriteStream(audiofile));
          var params = {
            audio: fs.createReadStream(audiofile),
            contentType: 'audio/flac',
            wordAlternativesThreshold: 0.9,
            keywords: ['colorado', 'tornado', 'tornadoes'],
            keywordsThreshold: 0.5,
          };
          speechToText.recognize(params)
          .then(results => {
            console.log(JSON.stringify(results, null, 2));
          })
          .catch(function (error) {
            console.log(error.error);
          });
        })
        .catch(function (error) {
          console.log(error.error);
        });
    });

That didn't work either.

Answer

The problem was that I was passing response from axios when it should have been response.data. I would have figured this out in five minutes with Postman, but Postman doesn't work with streams.

The other problem was as jfriend00 said, fs.createReadStream was unnecessary. The correct code is:

audio: response.data,

These lines are not needed:

const fs = require('fs');
response.data.pipe(createWriteStream(audiofile));
