Objective C/iOS - Save Audio Stream To File


Problem Description

Is it possible/Is there a way to save an audio stream to a local file? I'm very inexperienced and I don't know where to start; a point in the right direction would be appreciated.

I'm building an app that streams audio from URLs (e.g. http://audiostream.mp3). I'd like the user to be able to save the audio to a file, from when they start listening to when they finish. As the audio will be a stream, my concern is that I'm not dealing with a complete MP3 file; I'm guessing this makes things trickier than simply downloading a complete MP3.

Thanks in advance for any help.

Recommended Answer

You can save a live audio stream of indefinite duration. You need to use an AVAssetResourceLoader delegate together with a URLSessionDataTask.

It's a bit of a tricky process, so I will explain it step by step:

  1. Making an AVURLAsset:

let asset = AVURLAsset(url: urlWithCustomScheme)
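
The resource loader delegate is only consulted for URLs that AVFoundation cannot load natively, which is why a custom scheme is used here. A minimal sketch of building such a URL (the helper below is an assumption for illustration, not part of any framework API):

    // Sketch (assumption): swap the real scheme (e.g. "http") for a made-up
    // one so AVFoundation hands loading over to our resource loader delegate.
    func makeCustomSchemeURL(from url: URL, scheme: String = "streaming") -> URL? {
        var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
        components?.scheme = scheme
        return components?.url
    }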

  • Set the delegate:

    asset.resourceLoader.setDelegate(resourceLoaderDelegate, queue: DispatchQueue.main)
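
    Note that setDelegate(_:queue:) does not retain the delegate, so keep a strong reference to resourceLoaderDelegate yourself. The delegate is also only called once the asset is actually used; a minimal sketch (assumed setup, not shown in the original answer) of driving it with an AVPlayer:

    // Sketch (assumption): loading, and hence our delegate callbacks, only
    // start once the asset is consumed, e.g. by a player.
    let playerItem = AVPlayerItem(asset: asset)
    let player = AVPlayer(playerItem: playerItem)
    player.play()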
    

  • The following delegate function will be called (a sketch of a possible implementation follows the signature below):

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool
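
    A minimal sketch of what that implementation might look like (the scheme restoration is an assumption; the linked project below may differ):

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // Restore the real scheme (assumption: the stream is plain http) and
        // start the download exactly once.
        if session == nil, let url = loadingRequest.request.url,
           var components = URLComponents(url: url, resolvingAgainstBaseURL: false) {
            components.scheme = "http"
            if let originalURL = components.url {
                startDataRequest(with: originalURL)
            }
        }
        // Returning true tells AVFoundation we will answer the request later;
        // the linked project also feeds downloaded data back to such requests.
        return true
    }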
    

  • Now, inside this function, we call a function to start the data request so that downloading of the sound buffer can begin:

    func startDataRequest(with url: URL) {
        // "owner" is the answerer's own object that supplies the recording name.
        var recordingName = "default.mp3"
        if let recording = owner?.recordingName {
            recordingName = recording
        }
        fileURL = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
            .appendingPathComponent(recordingName)
        // Ignore caches so we always read fresh bytes from the live stream.
        let configuration = URLSessionConfiguration.default
        configuration.requestCachePolicy = .reloadIgnoringLocalAndRemoteCacheData
        session = URLSession(configuration: configuration, delegate: self, delegateQueue: nil)
        // Open the output stream before resuming the task, so the file is
        // ready when the first bytes arrive in the delegate callback.
        outputStream = OutputStream(url: fileURL, append: true)
        outputStream?.schedule(in: RunLoop.current, forMode: RunLoopMode.defaultRunLoopMode)
        outputStream?.open()
        session?.dataTask(with: url).resume()
    }
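
    For reference, the snippet assumes surrounding state roughly like the following (names taken from the code above; the exact declarations are an assumption):

    // Assumed properties on the resource loader delegate class:
    var session: URLSession?
    var outputStream: OutputStream?
    var fileURL: URL!
    weak var owner: SomeOwner?   // hypothetical type that supplies recordingName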
    

    As you can see, we have opened an output stream. We will use this stream to write the buffer to disk.

    Now that we have initiated a dataTask with the given URL, we will start receiving data bytes in the delegate function:

    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data)
    

  • Now we have started receiving the live audio stream. The last piece of the puzzle is storing it, which is the simplest part: we take the OutputStream we created and opened earlier and append the bytes we receive in the delegate function above; that's it, we have saved the desired part of the live audio stream.

    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        // Append the received bytes to the file (Swift 4-style withUnsafeBytes).
        let bytesWritten = data.withUnsafeBytes { outputStream?.write($0, maxLength: data.count) }
        print("bytes written: \(bytesWritten ?? 0) to \(fileURL)")
    }
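
    One detail the answer leaves implicit: when the stream ends or the user stops listening, close the stream so the file is finalized. A minimal sketch using the standard URLSessionTaskDelegate callback (not shown in the original answer):

    func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        // Close the output stream so any buffered bytes are flushed to disk.
        outputStream?.close()
        outputStream = nil
    }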
    

  • Here is the source code from a blog post I wrote on Medium, solving the problem you face: https://github.com/pandey-mohan/LiveAudioCapture
