How would I go about programmatically interacting with VST(i) Plugins to synthesize audio?

Question

Take, for example, the VSTi Triforce, by Tweakbench. When loaded up in any VST host on the market, it allows the host to send a (presumably MIDI) signal to the VSTi. The VSTi will then process that signal and output synthesized audio as created by a software instrument within the VSTi.

For example, sending an A4 (MIDI note, I believe) to the VSTi will cause it to synthesize the A above Middle C. It sends the audio data back to the VST Host, which then could either play it on my speakers or save it to .wav or some other audio file format.

Let's say I have Triforce, and am trying to write a program in my language of choice that could interact with the VSTi by sending in an A4 note to be synthesized, and automatically saving it to a file on the system?

Eventually, I'd like to be able to parse an entire one-track MIDI file (using established, stable libraries already available for this purpose) and send it to the VSTi to "render"/synthesize it into an audio file.

How would I go about this, and in what language should I look to build the core framework?

Ultimately, it will be used in a Ruby-based project, so any pointers to specific Ruby resources would be nice as well.

However, I'm just trying to understand basically how the API of a VSTi works. (I've realized that this question is very much related to the question of building a VST host in the first place, albeit one that can only save VST outputs to file and not play them back, and with considerably smaller scope)

Answer

Well, since you asked, the ideal language for a project like this is going to be C++. Although there are wrappers for higher-level languages such as Java & .NET for the VST SDK, I couldn't find one for Ruby (though I did find this rather cool project which lets you program VST plugins in Ruby). So you will be stuck doing some degree of C/C++ integration on your own.

That said, you have basically two options here:

  1. Write the VST host in C++ and launch it from Ruby as a separate process (a minimal command-line shape for this option is sketched just below).
  2. Integrate your Ruby code directly with the VST SDK and load the plugin DLLs/bundles straight from your own code. This is probably the cleaner, but more difficult, way to accomplish your goal.
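
To make option 1 a bit more concrete, the host can be a small command-line program that your Ruby code simply shells out to (e.g. with system or Open3). The sketch below only shows that hypothetical interface; render_midi_to_wav is a placeholder name for the host logic sketched later in this answer, not anything from the VST SDK.

```cpp
// Sketch of option 1: a standalone render host with a command-line interface
// that a Ruby script can invoke as a separate process.
#include <cstdio>

// Placeholder for the real host logic: load the plugin, feed it the MIDI
// file block by block, and write the resulting audio out as a WAV file.
static bool render_midi_to_wav(const char* pluginPath,
                               const char* midiPath,
                               const char* wavPath) {
    (void)pluginPath; (void)midiPath; (void)wavPath;
    return false;  // stub
}

int main(int argc, char* argv[]) {
    if (argc != 4) {
        std::fprintf(stderr, "usage: %s <plugin> <input.mid> <output.wav>\n", argv[0]);
        return 1;
    }
    return render_midi_to_wav(argv[1], argv[2], argv[3]) ? 0 : 1;
}
```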

I wrote up a VST host programming tutorial on my blog a while back which you may find useful in either case. It details how you open and communicate with VST plugins on both Mac OSX and Windows. Once you have gotten your host to load up the plugins, you need to be able to send MIDI events to the plugin, either by reading them from a file or through some type of communication between your Ruby code and the VST host (e.g., a named pipe, socket, file, etc.). If you are unfamiliar with the MIDI protocol, check out these links:

  • The MIDI technical fanatic's brainwashing center (silly name, serious resource)
  • The Sonic Spot's MIDI file specification (in case you need to read MIDI files)
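
For reference, here is a rough sketch of how a host hands a MIDI note-on to a VST 2.x instrument through the plugin's dispatcher. It assumes the plugin has already been loaded and opened as described in the tutorial above, and that the VST 2.x SDK header aeffectx.h is on your include path; the note number 69 (A4, 440 Hz) is just an example value.

```cpp
// Sketch: sending a single note-on to an already-loaded VST 2.x instrument.
// Assumes `effect` was obtained from the plugin's VSTPluginMain entry point
// and opened with effOpen, as covered in the tutorial linked above.
#include "aeffectx.h"
#include <cstring>

void sendNoteOn(AEffect* effect, char note, char velocity, VstInt32 deltaFrames) {
    VstMidiEvent midiEvent;
    std::memset(&midiEvent, 0, sizeof(midiEvent));
    midiEvent.type = kVstMidiType;
    midiEvent.byteSize = sizeof(VstMidiEvent);
    midiEvent.deltaFrames = deltaFrames;   // sample offset within the next block
    midiEvent.midiData[0] = (char)0x90;    // note-on, channel 1
    midiEvent.midiData[1] = note;          // e.g. 69 == A4 (440 Hz)
    midiEvent.midiData[2] = velocity;      // e.g. 100

    // VstEvents carries a list of VstEvent pointers; one event is enough here.
    VstEvents events;
    std::memset(&events, 0, sizeof(events));
    events.numEvents = 1;
    events.events[0] = reinterpret_cast<VstEvent*>(&midiEvent);

    // Hand the event to the plugin; it applies to the next audio block.
    effect->dispatcher(effect, effProcessEvents, 0, 0, &events, 0.0f);
}
```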

As you might have already figured out, VST is fundamentally a block-based protocol. You request small blocks of audio data from the plugin, and you send along any MIDI events to the plugin right before it processes that respective block. Be sure not to ignore the MIDI delta field; this will ensure that the plugin starts processing the MIDI event directly on the desired sample. Otherwise, the plugin will sound a bit off-tempo, especially in the case of instruments.
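
As a rough illustration of that block-based flow, the render loop ends up looking something like the sketch below. It assumes the same VST 2.x setup as the previous snippet; the stereo channel layout and the blockSize/numBlocks parameters are placeholders you would derive from your own output settings and the length of the MIDI file.

```cpp
// Sketch of the block-based render loop described above (VST 2.x, stereo).
// Assumes `effect` has already been opened, configured with effSetSampleRate
// and effSetBlockSize, and resumed via effMainsChanged.
#include "aeffectx.h"
#include <vector>

void renderBlocks(AEffect* effect, int numBlocks, int blockSize) {
    std::vector<float> inLeft(blockSize, 0.0f), inRight(blockSize, 0.0f);
    std::vector<float> outLeft(blockSize), outRight(blockSize);
    float* inputs[2]  = { inLeft.data(),  inRight.data() };
    float* outputs[2] = { outLeft.data(), outRight.data() };

    for (int block = 0; block < numBlocks; ++block) {
        // 1. Send any MIDI events whose timestamps fall inside this block,
        //    with deltaFrames set to their offset from the block start
        //    (see sendNoteOn above).
        // 2. Ask the plugin to fill the output buffers for this block.
        effect->processReplacing(effect, inputs, outputs, blockSize);
        // 3. Append outLeft/outRight to the output file (see the conversion
        //    sketch below).
    }
}
```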

The VST SDK is also based around floating-point blocks, so any data you get back will contain individual samples in the range { -1.0 .. 1.0 }. Depending on your desired output format, you may need to convert these to some other format. Fortunately, there seems to be a Ruby binding for the audiofile library, so you may be able to send your output into that in order to generate a proper AIFF/WAV file.
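
If you do end up writing 16-bit PCM (the usual WAV case) yourself instead of going through such a library, the conversion from those floating-point samples is just a clamp and a scale; a minimal sketch:

```cpp
// Sketch: converting VST floating-point samples ({-1.0 .. 1.0}) to 16-bit PCM.
#include <algorithm>
#include <cstdint>

int16_t floatToPcm16(float sample) {
    // Clamp first so out-of-range samples clip instead of wrapping around.
    sample = std::max(-1.0f, std::min(1.0f, sample));
    return static_cast<int16_t>(sample * 32767.0f);
}
```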

In all, it'll be a fair amount of work to get to your desired end goal, but it's not impossible by any means. Good luck!
