Rendering some sound data into one new sound data?


Question

I'm creating an application that will read a unique format that contains a sound "bank" and the offsets at which each sound must be played.

Imagine something like this:

Sound bank (ID on the left, file name on the right):

0 kick.wav
1 hit.wav
2 flute.wav

And the offsets (time in ms on the left, sound ID on the right):

1000 0
2000 1
3000 2

And the application will generate a new sound file (e.g. wav, for later conversion to other formats) that plays a kick at the first second, a hit at the second second, and a flute at the third.

I have no idea where to begin.

I usually use FMOD for audio playback, but I've never done anything like this before.

I'm using C++ and wxWidgets in an MSVC++ Express Edition environment, and LGPL libraries would be fine.

Answer

If I understand correctly, you want to generate a new wave file by mixing wavs from a sound bank. You may not need a sound API at all for this, especially if all your input wavs are in the same format.

Simply load each wav file into a buffer. Then, for each output sample up to SampleRate * secondsUntilNextStartTime, add buffer[bufferIdx++] from every buffer in the ActiveList into the output buffer, removing a buffer from the ActiveList once its bufferIdx == bufferLen. At each StartTime, add the next buffer to the ActiveList and repeat, as in the sketch below.
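A minimal sketch of that mixing, assuming every wav has already been decoded to 16-bit mono PCM at a shared sample rate; the Event struct and mixEvents function are illustrative names, not part of any library:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One scheduled sound: decoded PCM samples plus the output position
// (in samples) where it starts: startSample = offsetMs * sampleRate / 1000.
struct Event {
    const std::vector<int16_t>* samples;
    size_t startSample;
};

// Mix every event into a single output buffer. Assumes all inputs share
// one sample rate and channel layout (a simplification for this sketch).
std::vector<int16_t> mixEvents(const std::vector<Event>& events)
{
    // Output length = end of the event that finishes last.
    size_t totalLen = 0;
    for (const Event& e : events)
        totalLen = std::max(totalLen, e.startSample + e.samples->size());

    std::vector<int16_t> out(totalLen, 0);
    for (const Event& e : events) {
        for (size_t i = 0; i < e.samples->size(); ++i) {
            // Sum in a wider type and clamp, so overlapping sounds
            // saturate instead of wrapping around on overflow.
            int32_t v = int32_t(out[e.startSample + i]) + (*e.samples)[i];
            out[e.startSample + i] = static_cast<int16_t>(
                std::min<int32_t>(32767, std::max<int32_t>(-32768, v)));
        }
    }
    return out;
}
```

Writing the result out is then just a matter of prepending a wav header that matches the sample rate and format of the inputs.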

If FMOD supports output to a file instead of the sound hardware, you can do the same thing with its streaming API: just keep track of elapsed samples in the stream callback, and start mixing in new files whenever you reach their start offsets.
