Stream overlapping audio files to Chromecast Audio


Problem description

I would like to stream multiple overlapping audio files (sound effects that play at certain random times), so a kind of generated audio stream that will NEVER repeat exactly the same way. Some audio files loop; some play at specific times. Probably some kind of real-time stream insertion would be good, I guess.

What is the best way to write such server software? What protocol should be used for the streaming (I would prefer HTTP)? I would probably want to expose a URL for each configuration (tracks & timing of sound effects).

Any pointers to code/libraries? Any language would be fine, e.g. java/kotlin/go/rust/ruby/python/node/...

URL: https://server.org/audio?file1=loop&file2=every30s&file2_volume=0.5

Response: audio stream (that plays on cast devices)

The stream loops file1. Every 30 s it plays file2 at 50% volume (overlaid over file1, which plays at 100%). File1 is about 10m9s long, so the combination never really repeats. Therefore we cannot just serve a pre-generated mp3 file.

I currently have an Android application that plays different audio files at random. Some loop, some play every x seconds, sometimes as many as 10 at the same time.

Now I would like to add support for chromecast/chromecast audio/google home/... . I guess the best approach would be a server that streams the result. Every user would get his/her own stream while playing; there is no need for multiple users to listen to the same stream (even though that would probably be supported as well).

The server would basically read the URL, get the configuration, and then respond with an audio stream. The server opens one (or multiple) audio files and combines/overlays them into a single stream. Some of those audio files are looped; others are opened at specific times and added/overlaid onto the stream. Each audio file plays at its own volume level (some louder, some quieter). The question is how to produce such an audio stream and how to add the different files in real time.
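A minimal sketch of that URL-to-configuration step, assuming the parameter scheme from the example URL (`file1=loop`, `file2=every30s`, `file2_volume=0.5`); the `parseConfig` helper, the `everyNs` naming convention, and the mapping of `fileN` to `fileN.mp3` are all assumptions for illustration, not an existing API:

```javascript
// Turn query parameters like { file1: 'loop', file2: 'every30s',
// file2_volume: '0.5' } into a list of track descriptions.
function parseConfig(query) {
    var tracks = [];
    Object.keys(query).forEach(function (key) {
        var m = key.match(/^(file\d+)$/);       // only fileN keys are tracks
        if (!m) return;
        var name = m[1];
        var mode = query[key];                  // "loop" or "every30s"
        var volume = parseFloat(query[name + '_volume'] || '1');
        var track = { file: name + '.mp3', volume: volume };
        if (mode === 'loop') {
            track.loop = true;
        } else {
            var every = mode.match(/^every(\d+)s$/);
            if (every) track.intervalSec = parseInt(every[1], 10);
        }
        tracks.push(track);
    });
    return tracks;
}

// Example: the query from the URL in the question
var tracks = parseConfig({ file1: 'loop', file2: 'every30s', file2_volume: '0.5' });
console.log(JSON.stringify(tracks));
```

In an Express handler this would be called as `parseConfig(req.query)`, and the resulting track list would drive whatever mixing backend the server uses.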

Answer

So there are two parts to your problem:

  • Mixing the audio with different options
  • Streaming the mixed response from a web server

I can help you with the latter part; you will need to figure out the first part yourself.

Below is a sample nodejs script. To run it, create a directory and run

npm init
npm install fluent-ffmpeg express

Then save the following file as

server.js

var ff = require('fluent-ffmpeg');
var express = require('express');
var app = express();

app.get('/merged', (req, res) => {
    res.contentType('mp3');
    // res.header("Transfer-Encoding", "chunked")

    // Delay inputs 1-3 by different amounts, then mix all four inputs
    // into a single stream.
    var command = ff()
        .input("1.mp3")
        .input("2.mp3")
        .input("3.mp3")
        .input("4.mp3")
        .complexFilter(`[1]adelay=2|5[b];
        [2]adelay=10|12[c];
        [3]adelay=4|6[d];
        [0][b][c][d]amix=4`)
        .outputOptions(["-f", "flv"]);

    command.on('end', () => {
        console.log('Processing finished');
        // res.end()
    });
    command.on('error', function (err, stdout, stderr) {
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
    });
    command.pipe(res, {end: true});
})

app.listen(9090);

Run it with

node server.js

Now open http://localhost:9090/merged in VLC.

Now, for your requirement, the part below is what will change:

        .complexFilter(`[1]adelay=2|5[b];
        [2]adelay=10|12[c];
        [3]adelay=4|6[d];
        [0][b][c][d]amix=4`)
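For the question's own example (file1 looping, file2 every 30 s at 50% volume), one way that filter string could be generated programmatically is sketched below. This is an assumption-laden sketch, not a tested recipe: it pre-schedules a fixed number of delayed copies of file2 (which would be passed as repeated `-i` inputs) for a stream of known length, rather than doing true real-time insertion, and `buildFilter` and its label names are made up for illustration. ffmpeg's `adelay` takes milliseconds, one value per channel separated by `|`:

```javascript
// Build an ffmpeg complex filter that overlays one delayed, attenuated
// copy of the effect file per repetition, then mixes everything.
// streamSeconds: how long the generated stream should run;
// intervalSec: how often the effect repeats.
function buildFilter(streamSeconds, intervalSec) {
    var parts = [];
    var labels = [];
    var copies = Math.floor(streamSeconds / intervalSec);
    for (var i = 0; i < copies; i++) {
        var delayMs = (i + 1) * intervalSec * 1000;   // adelay wants ms
        var label = 'd' + i;
        // Input 0 is the looping base track; inputs 1..copies are the
        // repeated -i inputs holding the effect file.
        parts.push('[' + (i + 1) + ']volume=0.5,adelay=' +
            delayMs + '|' + delayMs + '[' + label + ']');
        labels.push('[' + label + ']');
    }
    parts.push('[0]' + labels.join('') + 'amix=' + (copies + 1) + ':duration=first');
    return parts.join(';');
}

console.log(buildFilter(90, 30));
```

For a never-ending stream this scheme would have to be restarted or chained in segments; a single `asplit` on one effect input would avoid repeating `-i`, but the repeated-input form keeps the filter string simple.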

But I am no ffmpeg expert, so I cannot guide you around that area. Perhaps that calls for another question, or for taking a lead from the many existing SO threads:

ffmpeg - how to merge multiple audio with time offset into a video?

How to merge two audio files while retaining correct timings with ffmpeg

ffmpeg mix audio at specific time

https://superuser.com/questions/850527 (combine video using ffmpeg between three specific times)
