Audio mixing algorithm changing volume


Problem description

I'm trying to mix some audio samples with the following algorithm:

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes)
{
    short* output = new short[numframes * 2]; // multiply 2 for channels

    for (int sample = 0; sample < numframes * 2; ++sample)
    {
        for (int sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
        {
            if (sample <= rawsources.at(sourceCount).frames * 2)
            {
                short outputSample = rawsources.at(sourceCount).data[sample];
                output[sample] += outputSample;
            }
        }
    }

    // post mixing volume compression
    for (int sample = 0; sample < numframes; ++sample)
    {
        output[sample] /= (float)rawsources.size();
    }

    return output;
}

I get the output I want, except that when one of the sources is done, the other sources start playing louder. I know why this happens, but I don't know how to solve it properly.

Also, this is a screenshot from Audacity of the audio I output:

As you can see, there's definitely something wrong. The audio is no longer centered at zero, and it gets louder once one of the sources finishes playing.

Most of all I'd like to fix the volume problem, but any other tweaks I can make are very much appreciated!

Some extra info: I know this code doesn't allow mono sources, but that's OK. I'm only going to use stereo interleaved audio samples.

Answer

Mixing usually doesn't divide by the number of sources. With your approach, mixing a normal track with a silent track would halve its amplitude. If you want, you can normalize the final track afterwards so that it stays within range.

The code is untested, so there may be errors:

#include <algorithm> // for std::max
#include <cmath>     // for std::fabs

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes)
{
  // Accumulate into floats: summing shorts directly can overflow,
  // and floats avoid distortion during the renormalization step.
  float* outputFloating = new float[numframes * 2];

  // The maximum absolute value of the mixed signal
  float maximumOutput = 0;

  for (int sample = 0; sample < numframes * 2; ++sample)
  {
      // Make sure the accumulator starts at zero
      outputFloating[sample] = 0;

      for (size_t sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
      {
          // Note: this should be a '<', not the '<=' from the question
          if (sample < rawsources.at(sourceCount).frames * 2)
              outputFloating[sample] += rawsources.at(sourceCount).data[sample];
      }

      // Track the peak
      maximumOutput = std::max(maximumOutput, std::fabs(outputFloating[sample]));
  }

  // The short output buffer
  short* output = new short[numframes * 2]; // multiply 2 for channels

  // Scale down only if the mix would clip the 16-bit range
  float multiplier = maximumOutput > 32767 ? 32767 / maximumOutput : 1;

  // Renormalize the track
  for (int sample = 0; sample < numframes * 2; ++sample)
      output[sample] = (short) (outputFloating[sample] * multiplier);

  delete[] outputFloating;
  return output;
}
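To make the idea concrete, here is a self-contained sketch of the same float-accumulate-then-peak-normalize approach. It uses `std::vector` instead of raw `new[]` so nothing has to be manually deleted; note that the `RawData` struct here is a hypothetical stand-in for the asker's type, which isn't shown in the question:

```cpp
#include <algorithm> // for std::max
#include <cmath>     // for std::fabs
#include <vector>

// Hypothetical stand-in for the asker's RawData: interleaved stereo samples.
struct RawData {
    std::vector<short> data;
    int frames; // number of stereo frames; data holds frames * 2 samples
};

// Sum all sources into a float buffer, then scale down only if the
// summed peak would clip the 16-bit range.
std::vector<short> mixAndNormalize(const std::vector<RawData>& sources, int numframes)
{
    std::vector<float> acc(numframes * 2, 0.0f); // zero-initialized accumulator
    float peak = 0.0f;

    for (int sample = 0; sample < numframes * 2; ++sample) {
        for (const RawData& src : sources)
            if (sample < src.frames * 2) // shorter sources simply contribute silence
                acc[sample] += src.data[sample];
        peak = std::max(peak, std::fabs(acc[sample]));
    }

    // Only attenuate when the mix actually exceeds the short range;
    // otherwise leave the levels untouched.
    float scale = peak > 32767.0f ? 32767.0f / peak : 1.0f;

    std::vector<short> out(numframes * 2);
    for (int sample = 0; sample < numframes * 2; ++sample)
        out[sample] = static_cast<short>(acc[sample] * scale);
    return out;
}
```

Because the scale factor depends on the actual peak rather than the source count, a source ending early no longer changes the loudness of the remaining sources.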
