Reverse an audio file Swift/Objective-C


Question

Is there a way that I could reverse and export an .m4a audio file? I found a solution to reverse an audio track here, but it only seems to work with the .caf file format. If the only way is to use .caf, is there a way to convert the .m4a file to .caf first?

Update: In another post I found out that AVAssetReader can be used to read audio samples from an audio file, but I have no idea how to write the samples back in reverse order. The code snippet below is taken directly from the answer to that post. Any help would be appreciated. Thanks.

+ (void) reverseAudioTrack: (AVAsset *)audioAsset outputURL: (NSURL *)outputURL {
NSError *error;

AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:audioAsset error:&error];
if (error) {NSLog(@"%@", error.localizedDescription);}

AVAssetTrack* track = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

NSMutableDictionary* audioReadSettings = [NSMutableDictionary dictionary];
[audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                     forKey:AVFormatIDKey];

AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:audioReadSettings];
[reader addOutput:readerOutput];
[reader startReading];

CMSampleBufferRef sample; //= [readerOutput copyNextSampleBuffer];
NSMutableArray *samples = [[NSMutableArray alloc] init];

// Get all samples
while((sample = [readerOutput copyNextSampleBuffer])) {
    [samples addObject:(__bridge id)sample];
    CFRelease(sample);
}

// Process samples in reverse
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                   fileType:AVFileTypeAppleM4A
                                                      error:&error];
if (error) {NSLog(@"%@", error.localizedDescription);}
NSDictionary *writerOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
                                      [ NSNumber numberWithInt: kAudioFormatAppleLossless ], AVFormatIDKey,
                                      [ NSNumber numberWithInt: 16 ], AVEncoderBitDepthHintKey,
                                      [ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
                                      [ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
                                      [ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey, nil ];

AVAssetWriterInput *audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:writerOutputSettings];

[writer addInput:audioWriterInput];
[writer startWriting];
[writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0]) ];

// (1) Would it work if I loop in reverse here?
for (NSInteger i = 0; i < samples.count; i++) {
    CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer((__bridge CMSampleBufferRef)samples[i]);

    CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples((__bridge CMSampleBufferRef)samples[i]);
    AudioBufferList audioBufferList;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer((__bridge CMSampleBufferRef)samples[i],
                                                            NULL,
                                                            &audioBufferList,
                                                            sizeof(audioBufferList),
                                                            NULL,
                                                            NULL,
                                                            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                            &buffer
                                                            );

    for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
        SInt16* samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
        for (int i=0; i < numSamplesInBuffer; i++) {
            // amplitude for the sample is samples[i], assuming you have linear pcm to start with

            // (2) What should I be doing to write the samples into an audio file?
        }
    }
    CFRelease(buffer);
}

Answer

Yes, there is a way you can process, then export, any of the audio formats that iOS supports.

However, most of these formats (MP3, to name one) are lossy and compressed. You must first decompress the data, apply the transformation, and recompress. Most transformations you will apply to the audio should be done at the raw PCM level.

Combining these two statements, you do this in a few passes:

  1. Convert the original file to a kAudioFormatLinearPCM-compliant audio file, such as AIFF (a rough sketch of this step follows the list)
  2. Process that temporary file (reverse its content)
  3. Convert the temporary file back to the original format
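
The original answer gives no code for step 1. As a minimal sketch (not from the answer itself), the conversion can be done with AVAssetReader and AVAssetWriter: decode the compressed track to Linear PCM sample buffers and pass them straight through into a .caf file. The function name exportToLinearPCM, the queue label, and the output settings below are assumptions for illustration only.

import AVFoundation

// A minimal sketch of step 1, assuming `sourceURL` points to the .m4a file and
// `pcmURL` is where the temporary Linear PCM (.caf) file should be written.
// Error handling is reduced to simple guards for brevity.
func exportToLinearPCM(sourceURL: URL, pcmURL: URL, completion: @escaping (Bool) -> Void) {
    let asset = AVURLAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .audio).first,
          let reader = try? AVAssetReader(asset: asset),
          let writer = try? AVAssetWriter(outputURL: pcmURL, fileType: .caf) else {
        completion(false)
        return
    }

    // Ask the reader to decode the compressed track to 16-bit interleaved Linear PCM.
    let pcmSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: pcmSettings)
    reader.add(output)

    // nil output settings = pass the decoded PCM buffers through unchanged.
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
    writer.add(input)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "reverse.audio.pcm-export")
    input.requestMediaDataWhenReady(on: queue) {
        while input.isReadyForMoreMediaData {
            guard let buffer = output.copyNextSampleBuffer() else {
                // No more samples: close the writer and report the result.
                input.markAsFinished()
                writer.finishWriting {
                    completion(writer.status == .completed)
                }
                return
            }
            if !input.append(buffer) {
                break // Append failed; error handling is omitted in this sketch.
            }
        }
    }
}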


Just as when applying a transformation to, say, a compressed JPEG image, there will be degradation in the process. The final audio will, at best, have suffered one more compression cycle.

So, strictly speaking, the mathematically lossless answer to this question is actually no.

Just for reference, here is some starter code in Swift 3. It needs further refinement to skip the file headers.

var outAudioFile: AudioFileID?

// Describe the temporary file: 16-bit, mono, big-endian Linear PCM at 44.1 kHz (AIFF stores big-endian PCM).
var pcm = AudioStreamBasicDescription(mSampleRate: 44100.0,
                                      mFormatID: kAudioFormatLinearPCM,
                                      mFormatFlags: kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger,
                                      mBytesPerPacket: 2,
                                      mFramesPerPacket: 1,
                                      mBytesPerFrame: 2,
                                      mChannelsPerFrame: 1,
                                      mBitsPerChannel: 16,
                                      mReserved: 0)

var theErr = AudioFileCreateWithURL(destUrl as CFURL,
                                    kAudioFileAIFFType,
                                    &pcm,
                                    .eraseFile,
                                    &outAudioFile)
if noErr == theErr, let outAudioFile = outAudioFile {
    var inAudioFile: AudioFileID?
    theErr = AudioFileOpenURL(sourceUrl as CFURL, .readPermission, 0, &inAudioFile)

    if noErr == theErr, let inAudioFile = inAudioFile {

        // Ask the source file how many bytes of audio data it contains.
        var fileDataSize: UInt64 = 0
        var thePropertySize: UInt32 = UInt32(MemoryLayout<UInt64>.stride)
        theErr = AudioFileGetProperty(inAudioFile,
                                      kAudioFilePropertyAudioDataByteCount,
                                      &thePropertySize,
                                      &fileDataSize)

        if noErr == theErr {
            let dataSize: Int64 = Int64(fileDataSize)
            let theData = UnsafeMutableRawPointer.allocate(bytes: Int(dataSize),
                                                           alignedTo: MemoryLayout<UInt8>.alignment)

            // Walk the source backwards one 16-bit sample (2 bytes) at a time,
            // writing each sample to the destination front to back.
            var readPoint: Int64 = dataSize - 2
            var writePoint: Int64 = 0

            while readPoint >= 0 {
                var bytesToRead = UInt32(2)

                AudioFileReadBytes(inAudioFile, false, readPoint, &bytesToRead, theData)
                AudioFileWriteBytes(outAudioFile, false, writePoint, &bytesToRead, theData)

                writePoint += 2
                readPoint -= 2
            }

            theData.deallocate(bytes: Int(dataSize), alignedTo: MemoryLayout<UInt8>.alignment)

            AudioFileClose(inAudioFile)
            AudioFileClose(outAudioFile)
        }
    }
}
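
For illustration, the snippet above expects sourceUrl and destUrl to already be in scope. Something like the following would do; the file names and locations are hypothetical:

import AudioToolbox
import Foundation

// Hypothetical locations: a Linear PCM (AIFF) source produced by step 1 and a
// destination for the reversed copy, both in the temporary directory.
let sourceUrl = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("forward.aif")
let destUrl = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("reversed.aif")

For step 3, one option (not from the original answer, just a common approach) is to re-encode the reversed AIFF back to .m4a with AVAssetExportSession and the Apple M4A preset. The name m4aUrl is an assumption for illustration:

import AVFoundation

// A minimal sketch of step 3, assuming `destUrl` is the reversed AIFF from above
// and `m4aUrl` is where the final AAC/.m4a file should go (it must not already exist).
let m4aUrl = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("reversed.m4a")
let reversedAsset = AVURLAsset(url: destUrl)

if let export = AVAssetExportSession(asset: reversedAsset, presetName: AVAssetExportPresetAppleM4A) {
    export.outputURL = m4aUrl
    export.outputFileType = .m4a
    export.exportAsynchronously {
        // .completed means the reversed .m4a is ready at m4aUrl.
        print("Export finished with status: \(export.status.rawValue)")
    }
}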
