How would I go about programmatically interacting with VST(i) plugins to synthesize audio?


Problem description

Take, for example, the VSTi Triforce, by Tweakbench. When loaded up in any VST host on the market, it allows the host to send a (presumably MIDI) signal to the VSTi. The VSTi will then process that signal and output synthesized audio as created by a software instrument within the VSTi.

For example, sending an A4 (MIDI note, I believe) to the VSTi will cause it to synthesize the A above Middle C. It sends the audio data back to the VST Host, which then could either play it on my speakers or save it to .wav or some other audio file format.

Let's say I have Triforce, and am trying to write a program in my language of choice that could interact with the VSTi by sending in an A4 note to be synthesized, and automatically saving it to a file on the system?

Eventually, I'd like to be able to parse an entire one-track MIDI file (using established, stable libraries already available for this purpose) and send it to the VSTi to "render"/synthesize it into an audio file.

How would I go about this, and in what language should I look to build the core framework?

Ultimately, it will be used in a Ruby-based project, so any pointers to specific Ruby resources would be nice as well.

However, I'm just trying to understand basically how the API of a VSTi works. (I've realized that this question is very much related to the question of building a VST host in the first place, albeit one that can only save VST outputs to file and not play them back, and with considerably smaller scope)

Thanks in advance for any help =)

Recommended answer

Well, since you asked, the ideal language for a project like this is going to be C++. Although there are wrappers for higher-level languages such as Java & .NET for the VST SDK, I couldn't find one for Ruby (though I did find this rather cool project which lets you program VST plugins in Ruby). So you will be stuck doing some degree of C/C++ integration on your own.

That said, you have basically two options here:


  1. Write the VST host in C++, and launch it as a separate process from Ruby.

  2. Integrate your Ruby code directly into the VST SDK, and load the plugin DLLs/bundles straight from your code. This is probably the cleaner way to accomplish your goal, but harder.

I wrote up a VST host programming tutorial on my blog awhile back which you may find useful in either case. It details how to open and communicate with VST plugins on both Mac OSX and Windows. Once you have gotten your host to load up the plugin, you need to be able to send MIDI events directly to the plugin, either by reading them from a file or through some type of communication between your Ruby code and the VST host (i.e., a named pipe, socket, file, etc.). If you are unfamiliar with the MIDI protocol, check out these links:

  • The MIDI technical fanatic's brainwashing center (silly name, serious resource)
  • The Sonic Spot's MIDI file specification (in case you need to read MIDI files)

As you might have already figured out, VST is fundamentally a block-based protocol. You request small blocks of audio data from the plugin, and you send along any MIDI events to the plugin right before it processes that respective block. Be sure not to ignore the MIDI delta field; this will ensure that the plugin starts processing the MIDI event directly on the desired sample. Otherwise, the plugin will sound a bit off-tempo, especially in the case of instruments.

The VST SDK is also based around floating-point blocks, so any data you get back will contain individual samples in the range { -1.0 .. 1.0 }. Depending on your desired output format, you may need to convert these to some other format. Fortunately, there seems to be a Ruby binding for the audiofile library, so you may be able to send your output into that in order to generate a proper AIFF/WAV file.

In all, it'll be a fair amount of work to get to your desired end goal, but it's not impossible by any means. Good luck!
