Using AVAudioEngine to schedule sounds for low-latency metronome


Problem description

I am creating a metronome as part of a larger app and I have a few very short wav files to use as the individual sounds. I would like to use AVAudioEngine because NSTimer has significant latency problems and Core Audio seems rather daunting to implement in Swift. I'm attempting the following, but I'm currently unable to implement the first 3 steps and I'm wondering if there is a better way.

Code outline:

  1. Create an array of file URLs according to the metronome's current settings (number of beats per bar and subdivisions per beat; file A for beats, file B for subdivisions)
  2. Programmatically create a wav file with the appropriate number of frames of silence, based on the tempo and the length of the files, and insert it into the array between each of the sounds (the frame arithmetic for this is sketched just after this list)
  3. Read those files into a single AudioBuffer or AudioBufferList
  4. audioPlayer.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
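
As a rough illustration of step 2 (this is not from the original question), the length of the gap between two clicks falls out of the tempo and the sample rate. The function below is only a sketch: clickFile and bpm are placeholder names, and it uses current Swift/AVFoundation spellings rather than the Swift 2 syntax shown elsewhere on this page.

import AVFoundation

// Silence to leave after each click so that click + gap spans exactly one beat.
// `clickFile` is a placeholder for an AVAudioFile opened on one of the short wav files.
func gapFrames(forBpm bpm: Double, after clickFile: AVAudioFile) -> AVAudioFrameCount {
    let sampleRate = clickFile.processingFormat.sampleRate
    // One beat lasts 60 / bpm seconds, i.e. sampleRate * 60 / bpm frames.
    let framesPerBeat = AVAudioFrameCount(sampleRate * 60.0 / bpm)
    let clickFrames = AVAudioFrameCount(clickFile.length)
    return framesPerBeat > clickFrames ? framesPerBeat - clickFrames : 0
}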

So far I have been able to play a looping buffer (step 4) of a single sound file, but I haven't been able to construct a buffer from an array of files or create silence programmatically, nor have I found any answers on StackOverflow that address this. So I'm guessing that this isn't the best approach.
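
Steps 2 and 3 can also be done entirely in memory, without writing intermediate silent wav files: read each short file into its own buffer and copy its samples, followed by a zero-filled gap, into one large AVAudioPCMBuffer that can then be scheduled with the looping option. The sketch below is not from the original question or answer; it assumes all files share the same processing format, uses current Swift/AVFoundation names, and takes gapFrames from the tempo arithmetic sketched above.

import Foundation
import AVFoundation

// A sketch of steps 2-3 done in memory: every file is copied into one large
// buffer, with `gapFrames` of zeroed samples (silence) left after each sound.
// Assumes all files share the same (deinterleaved float) processing format.
func combinedBuffer(from urls: [URL], gapFrames: AVAudioFrameCount) throws -> AVAudioPCMBuffer? {
    let files = try urls.map { try AVAudioFile(forReading: $0) }
    guard let format = files.first?.processingFormat else { return nil }

    // Capacity: every sound followed by one gap of silence.
    let totalFrames = files.reduce(AVAudioFrameCount(0)) {
        $0 + AVAudioFrameCount($1.length) + gapFrames
    }
    guard let result = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: totalFrames),
          let dst = result.floatChannelData else { return nil }
    result.frameLength = totalFrames

    // Start from silence everywhere, then copy each sound in at its offset.
    for channel in 0..<Int(format.channelCount) {
        memset(dst[channel], 0, Int(totalFrames) * MemoryLayout<Float>.size)
    }

    var writePosition = 0
    for file in files {
        let frames = AVAudioFrameCount(file.length)
        guard let chunk = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames),
              let src = chunk.floatChannelData else { return nil }
        try file.read(into: chunk)

        for channel in 0..<Int(format.channelCount) {
            memcpy(dst[channel] + writePosition, src[channel],
                   Int(chunk.frameLength) * MemoryLayout<Float>.size)
        }
        writePosition += Int(frames + gapFrames)   // leave the gap as zeros
    }
    return result
}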

My question is: is it possible to use AVAudioEngine to schedule a sequence of sounds with low latency and then loop that sequence? If not, which framework or approach is best suited to scheduling sounds when coding in Swift?

Recommended answer

I was able to make a buffer containing the sound from a file followed by silence of the required length. Hope this helps:

// audioFile here is an instance of AVAudioFile initialized with a wav file
func tickBuffer(forBpm bpm: Int) -> AVAudioPCMBuffer {
    audioFile.framePosition = 0 // position in the file to read from; needed if you read from the same AVAudioFile more than once
    let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm)) // tick's length for given bpm (sound length + silence length)
    let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: periodLength)
    try! audioFile.readIntoBuffer(buffer) // sorry for forcing try
    buffer.frameLength = periodLength // key to success: this appends silence after the sound
    return buffer
}

// player is an instance of AVAudioPlayerNode attached to your AVAudioEngine
func startLoop() {
    player.stop()
    let buffer = tickBuffer(forBpm: bpm)
    player.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
    player.play()
}
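
For completeness: the answer assumes an AVAudioEngine graph in which the player node is already attached, connected, and running. A minimal end-to-end sketch of that setup, written with the current Swift/AVFoundation spellings of the same calls used above (attach, connect(_:to:format:), read(into:), scheduleBuffer(_:at:options:completionHandler:) with .loops), might look like this; the wav URL and bpm value are placeholders, and the tick buffer uses the same frameLength trick as the answer.

import AVFoundation

final class Metronome {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let audioFile: AVAudioFile

    init(tickURL: URL) throws {
        audioFile = try AVAudioFile(forReading: tickURL)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: audioFile.processingFormat)
        try engine.start()
    }

    // Same idea as the answer: read the click, then extend frameLength to one
    // full beat so the remainder of the buffer plays back as silence.
    private func tickBuffer(forBpm bpm: Double) throws -> AVAudioPCMBuffer? {
        audioFile.framePosition = 0
        let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60.0 / bpm)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                            frameCapacity: periodLength) else { return nil }
        try audioFile.read(into: buffer)
        buffer.frameLength = periodLength
        return buffer
    }

    func startLoop(bpm: Double) throws {
        guard let buffer = try tickBuffer(forBpm: bpm) else { return }
        player.stop()
        player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        player.play()
    }
}

Usage would be along the lines of let metronome = try Metronome(tickURL: tickURL) followed by try metronome.startLoop(bpm: 120).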
