ARKit – Spatial Audio barely changes the volume over distance


Problem description


I created an SCNNode and added an audio source to it.

It is a mono audio file. Everything is set up correctly.

It works as Spatial Audio; that's not the problem.

The problem is that as I get closer or move farther away, the volume barely changes. I know it changes if I get very, very far away, but it's nothing like what Apple demonstrated here:

https://youtu.be/d9kb1LfNNU4?t=23

In some other games I've seen, the audio volume really does change within a single step of distance.

With mine, after one step you can't even tell the volume changed; you need at least 4 steps.

Does anyone have a clue why?

Code below:

SCNNode *audioNode = [[SCNNode alloc] init];
SCNAudioSource *audioSource = [[SCNAudioSource alloc] initWithFileNamed:audioFileName];
audioSource.loops = YES;
[audioSource load];
audioSource.volume = 0.05; // <-- i used different values. won't change much either
audioSource.positional = YES;
//audioSource.shouldStream = NO; // <-- makes no difference
[audioNode addAudioPlayer:[SCNAudioPlayer audioPlayerWithSource:audioSource]];

[audioNode runAction:[SCNAction playAudioSource:audioSource waitForCompletion:NO] completionHandler:nil];
[massNode addChildNode:audioNode];

Maybe it's the scale of the nodes?

The whole scene is around 4 feet across.

When I add an object I usually scale it to 0.005 (otherwise it gets way too big). But I also tried one that was already the right size in the .scn file.

It shouldn't affect anything, though, since the result is a coffee-table-sized scene and I can see the objects just fine.

Solution

Updated.

Here's working code for controlling a sound's decay (it works on both iOS and macOS):

import AVFoundation
import ARKit

class ViewController: UIViewController, AVAudioMixing {

    @IBOutlet var sceneView: SCNView!
    // @IBOutlet var sceneView: ARSCNView!
    
    func destination(forMixer mixer: AVAudioNode,
                                bus: AVAudioNodeBus) -> AVAudioMixingDestination? {
        return nil
    }
    var volume: Float = 0.0
    var pan: Float = 0.0
    
    var sourceMode: AVAudio3DMixingSourceMode = .bypass
    var pointSourceInHeadMode: AVAudio3DMixingPointSourceInHeadMode = .bypass
    
    var renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.sphericalHead
    var rate: Float = 1.2
    var reverbBlend: Float = 40.0
    var obstruction: Float = -100.0
    var occlusion: Float = -100.0
    var position = AVAudio3DPoint(x: 0, y: 0, z: 10)
    let audioNode = SCNNode()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        let myScene = SCNScene()
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 0)
        myScene.rootNode.addChildNode(cameraNode)
        
        // let sceneView = view as! SCNView
        sceneView.scene = myScene
        sceneView.backgroundColor = UIColor.orange
        
        let myPath = Bundle.main.path(forResource: "Mono_Audio", ofType: "mp3")           
        let myURL = URL(fileURLWithPath: myPath!)
        let mySource = SCNAudioSource(url: myURL)!
        mySource.loops = true
        mySource.isPositional = true           // Positional Audio
        mySource.shouldStream = false          // FALSE for Positional Audio
        mySource.volume = volume
        mySource.reverbBlend = reverbBlend
        mySource.rate = rate

        mySource.load()
        
        let player = SCNAudioPlayer(source: mySource)
        let sphere: SCNGeometry = SCNSphere(radius: 0.1)
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.addChildNode(audioNode)
        myScene.rootNode.addChildNode(sphereNode)
        audioNode.addAudioPlayer(player)            

        // Shrink the attenuation range to table-top scale so that a single
        // step audibly changes the volume.
        sceneView.audioEnvironmentNode.distanceAttenuationParameters.maximumDistance = 2
        sceneView.audioEnvironmentNode.distanceAttenuationParameters.referenceDistance = 0.1
        sceneView.audioEnvironmentNode.renderingAlgorithm = .auto

        // sceneView.audioEnvironmentNode.reverbParameters.enable = true
        // sceneView.audioEnvironmentNode.reverbParameters.loadFactoryReverbPreset(.plate)
        
        let hither = SCNAction.moveBy(x: 0, y: 0, z: 1, duration: 2)
        let thither = SCNAction.moveBy(x: 0, y: 0, z: -1, duration: 2)
        
        let sequence = SCNAction.sequence([hither, thither])
        let loop = SCNAction.repeatForever(sequence)
        sphereNode.runAction(loop) 
    }
}
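
To see why the defaults feel so flat at table-top distances, it helps to plug numbers into the inverse attenuation model that distanceAttenuationParameters uses by default: gain = referenceDistance / (referenceDistance + rolloffFactor * (distance – referenceDistance)), with no attenuation closer than the reference distance. Here's a rough sketch of that formula, assuming the usual defaults (a referenceDistance of about 1 meter and a rolloffFactor of 1) versus the 0.1 value set above:

// Sketch of the inverse model: no attenuation below referenceDistance,
// then gain = ref / (ref + rolloff * (distance - ref)).
func attenuationGain(distance: Float, referenceDistance: Float, rolloffFactor: Float = 1) -> Float {
    guard distance > referenceDistance else { return 1 }   // closer than ref: full volume
    return referenceDistance / (referenceDistance + rolloffFactor * (distance - referenceDistance))
}

// With the default referenceDistance (about 1 m), one step barely registers:
print(attenuationGain(distance: 0.5, referenceDistance: 1.0))   // 1.0    (no attenuation yet)
print(attenuationGain(distance: 1.5, referenceDistance: 1.0))   // ≈0.67  (about -3.5 dB)

// With referenceDistance = 0.1 m, as set above, the same step is obvious:
print(attenuationGain(distance: 0.5, referenceDistance: 0.1))   // 0.2    (about -14 dB)
print(attenuationGain(distance: 1.5, referenceDistance: 0.1))   // ≈0.07  (about -23.5 dB)

In a 4-foot scene almost all movement stays near or below the default reference distance, which is why a single step is barely audible until referenceDistance is lowered.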

And, yes, you're absolutely right – there are some obligatory settings.

But there are 7 of them:

  • use the AVAudioMixing protocol with its stubs (properties and methods).

  • use a MONO audio file.

  • use source.isPositional = true.

  • use source.shouldStream = false.

  • assign a maximumDistance value to the distanceAttenuationParameters property.

  • assign a referenceDistance value to the distanceAttenuationParameters property.

  • and the location of the mySource.load() call in your code is very important.

P.S. If the aforementioned tips didn't help, use these additional instance properties to make your sound even quieter, via the attenuation curve, obstacles, and the orientation of the implicit listener (a rough sketch follows the list):

var rolloffFactor: Float { get set }      // attenuation curve, default = 1 

var obstruction: Float { get set }        // default = 0.0

var occlusion: Float { get set }          // default = 0.0

var listenerAngularOrientation: AVAudio3DAngularOrientation { get set } //(0,0,0)
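
Here's a minimal sketch of where those knobs live, assuming the sceneView and player from the code above, and that the player's underlying audio node adopts AVAudio3DMixing (the values are only illustrative):

let env = sceneView.audioEnvironmentNode

// Steeper attenuation curve: gain drops faster per meter (default rolloffFactor = 1).
env.distanceAttenuationParameters.rolloffFactor = 4

// Turning the implicit listener away from the source can also make it quieter.
env.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 180, pitch: 0, roll: 0)

// Obstruction/occlusion are per-source and sit on the mixing node behind the player.
if let mixing = player.audioNode as? AVAudio3DMixing {
    mixing.obstruction = -20   // dB, as if an obstacle stood between source and listener
    mixing.occlusion = -20     // dB, as if a wall separated them completely
}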

It definitely works if you write it in Objective-C as well.

In this example, the audioNode is 1 meter away from the listener.
