Passing Sound (wav) file to JavaScript from Objective-C


Problem Description


I am recording a sound file (wav format) in Objective-C. I want to pass this back to JavaScript using Objective-C's stringByEvaluatingJavaScriptFromString. I am thinking that I will have to convert the wav file to a base64 string to pass it to this function. Then I will have to convert the base64 string back to (wav/blob) format in JavaScript to pass it to an audio tag to play it. I don't know how I can do that. I'm also not sure whether that is the best way to pass the wav file back to JavaScript. Any ideas will be appreciated.
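
As a rough illustration of that base64 route, the native side could look something like the sketch below. This is only an assumption-laden example, not code from the answer: wavPath, myWebView and the page-side function onWavReady are illustrative names, base64EncodedStringWithOptions: needs iOS 7 or later, and very large strings may make this approach impractical.

// Sketch only: push a recorded WAV file to the page as a base64 string.
// "wavPath", "myWebView" and the JS function "onWavReady" are illustrative names.
NSData *wavData = [NSData dataWithContentsOfFile:wavPath];
NSString *base64String = [wavData base64EncodedStringWithOptions:0]; // iOS 7+
NSString *js = [NSString stringWithFormat:@"onWavReady('%@');", base64String];
[myWebView stringByEvaluatingJavaScriptFromString:js];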

Solution

Well, this was not as straightforward as I expected, so here is how I was able to achieve it.

Step 1: I recorded the audio in CAF format using AVAudioRecorder.

NSArray *dirPaths;
NSString *docsDir;

dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);

docsDir = [dirPaths objectAtIndex:0];

soundFilePath = [docsDir stringByAppendingPathComponent:@"sound.caf"];

NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:AVAudioQualityMin],
    AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16],
    AVEncoderBitRateKey,
    [NSNumber numberWithInt:2],
    AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100],
    AVSampleRateKey,
    nil];

NSError *error = nil;

audioRecorder = [[AVAudioRecorder alloc]
                 initWithURL:soundFileURL
                 settings:recordSettings error:&error];

if(error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}

After this, you just need to call audioRecorder.record to record the audio. It will be recorded in CAF format. If you want to see my recordAudio function, here it is.

- (void)recordAudio
{
    if (!audioRecorder.recording)
    {
        _playButton.enabled = NO;
        _recordButton.title = @"Stop";
        [audioRecorder record];
        [self animate1:nil finished:nil context:nil];
    }
    else
    {
        [_recordingImage stopAnimating];
        [audioRecorder stop];
        _playButton.enabled = YES;
        _recordButton.title = @"Record";
    }
}
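
One thing the snippets above do not show is audio-session setup; on an actual device, recording generally needs a session category that allows input before record is called. Below is a minimal sketch, assuming it runs once before recordAudio; it is an addition for clarity, not part of the original answer.

// Sketch: configure and activate an audio session that permits recording.
NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setActive:YES error:&sessionError];
if (sessionError) {
    NSLog(@"audio session error: %@", [sessionError localizedDescription]);
}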

Step 2: Convert the CAF file to WAV format. I was able to do this using the following function.

- (BOOL)exportAssetAsWaveFormat:(NSString *)filePath
{
   NSError *error = nil;

NSDictionary *audioSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                              [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                              [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                              [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                              [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                              [NSData data], AVChannelLayoutKey, nil];

NSString *audioFilePath = filePath;
AVURLAsset * URLAsset = [[AVURLAsset alloc]  initWithURL:[NSURL fileURLWithPath:audioFilePath] options:nil];

if (!URLAsset) return NO ;

AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:URLAsset error:&error];
if (error) return NO;

NSArray *tracks = [URLAsset tracksWithMediaType:AVMediaTypeAudio];
if (![tracks count]) return NO;

AVAssetReaderAudioMixOutput *audioMixOutput = [AVAssetReaderAudioMixOutput
                                               assetReaderAudioMixOutputWithAudioTracks:tracks
                                               audioSettings:audioSetting];

if (![assetReader canAddOutput:audioMixOutput]) return NO;

[assetReader addOutput:audioMixOutput];

if (![assetReader startReading]) return NO;



NSString *title = @"WavConverted";
NSArray *docDirs = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES);
NSString *docDir = [docDirs objectAtIndex: 0];
NSString *outPath = [[docDir stringByAppendingPathComponent:title]
                     stringByAppendingPathExtension:@"wav"];

// only bail out if an old output file exists and cannot be removed
if ([[NSFileManager defaultManager] fileExistsAtPath:outPath] &&
    ![[NSFileManager defaultManager] removeItemAtPath:outPath error:NULL])
{
    return NO;
}

soundFilePath = outPath;

NSURL *outURL = [NSURL fileURLWithPath:outPath];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outURL
                                                      fileType:AVFileTypeWAVE
                                                         error:&error];
if (error) return NO;

AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                           outputSettings:audioSetting];
assetWriterInput.expectsMediaDataInRealTime = NO;

if (![assetWriter canAddInput:assetWriterInput]) return NO;

[assetWriter addInput:assetWriterInput];

if (![assetWriter startWriting]) return NO;


//[assetReader retain];
//[assetWriter retain];

[assetWriter startSessionAtSourceTime:kCMTimeZero];

dispatch_queue_t queue = dispatch_queue_create( "assetWriterQueue", NULL );

[assetWriterInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{

    NSLog(@"start");

    while (1)
    {
        if ([assetWriterInput isReadyForMoreMediaData] && (assetReader.status == AVAssetReaderStatusReading)) {

            CMSampleBufferRef sampleBuffer = [audioMixOutput copyNextSampleBuffer];

            if (sampleBuffer) {
                [assetWriterInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
            } else {
                [assetWriterInput markAsFinished];
                break;
            }
        }
    }

    [assetWriter finishWriting];

    //[self playWavFile];
    NSError *err;
    NSData *audioData = [NSData dataWithContentsOfFile:soundFilePath options: 0 error:&err];
    [self.audioDelegate doneRecording:audioData];
    //[assetReader release ];
    //[assetWriter release ];
    NSLog(@"soundFilePath=%@",soundFilePath);
    NSDictionary *dict = [[NSFileManager defaultManager] attributesOfItemAtPath:soundFilePath error:&err];
    NSLog(@"size of wav file = %@",[dict objectForKey:NSFileSize]);
    //NSLog(@"finish");
}];

// the export continues asynchronously on the writer queue
return YES;
}
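
The answer does not show where this conversion is kicked off. One natural hook, assuming the recording class is also set as the recorder's delegate, is the AVAudioRecorderDelegate callback; the sketch below is illustrative, not the author's code.

// Sketch: start the CAF-to-WAV conversion once recording has finished.
- (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)recorder successfully:(BOOL)flag
{
    if (flag) {
        // soundFilePath still points at the recorded .caf file here
        [self exportAssetAsWaveFormat:soundFilePath];
    }
}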

In this function, I am calling the audioDelegate function doneRecording with audioData, which is in WAV format. Here is the code for doneRecording.

-(void) doneRecording:(NSData *)contents
{
myContents = [[NSData dataWithData:contents] retain];
[self returnResult:alertCallbackId args:@"Recording Done.",nil];
}

// Call this function when you have results to send back to javascript callbacks
 // callbackId : int comes from handleCall function

// args: list of objects to send to the javascript callback
- (void)returnResult:(int)callbackId args:(id)arg, ...;
{
  if (callbackId==0) return;

  va_list argsList;
  NSMutableArray *resultArray = [[NSMutableArray alloc] init];

  if(arg != nil){
    [resultArray addObject:arg];
    va_start(argsList, arg);
    while((arg = va_arg(argsList, id)) != nil)
      [resultArray addObject:arg];
    va_end(argsList);
  }

   NSString *resultArrayString = [json stringWithObject:resultArray allowScalar:YES error:nil];
   [self performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:) withObject:[NSString stringWithFormat:@"NativeBridge.resultForCallback(%d,%@);",callbackId,resultArrayString] waitUntilDone:NO];
   [resultArray release];    
}

Step 3: Now it is time to tell the JavaScript inside the UIWebView that we are done recording the audio, so it can start accepting the data in blocks from us. I am using WebSockets to transfer the data back to JavaScript. The data is transferred in blocks because the server (https://github.com/benlodotcom/BLWebSocketsServer) that I was using was built with libwebsockets (http://git.warmcat.com/cgi-bin/cgit/libwebsockets/).

This is how you start the server in the delegate class.

- (id)initWithFrame:(CGRect)frame 
{
  if (self = [super initWithFrame:frame]) {

      [self _createServer];
      [self.server start];
      myContents = [NSData data];

    // Set delegate in order to "shouldStartLoadWithRequest" to be called
    self.delegate = self;

    // Set non-opaque in order to make "body{background-color:transparent}" working!
    self.opaque = NO;

    // Instantiate JSON parser library
    json = [ SBJSON new ];

    // load our html file
    NSString *path = [[NSBundle mainBundle] pathForResource:@"webview-document" ofType:@"html"];
    [self loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:path]]];



  }
  return self;
}
-(void) _createServer
{
    /*Create a simple echo server*/
    self.server = [[BLWebSocketsServer alloc] initWithPort:9000 andProtocolName:echoProtocol];
    [self.server setHandleRequestBlock:^NSData *(NSData *data) {

        NSString *convertedString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSLog(@"Received Request...%@",convertedString);

        if([convertedString isEqualToString:@"start"])
        {
            NSLog(@"myContents size: %d",[myContents length]);

            int contentSize = [myContents length];
            int chunkSize = 64*1023;
            chunksCount = ([myContents length]/(64*1023))+1;

            NSLog(@"ChunkSize=%d",chunkSize);
            NSLog(@"chunksCount=%d",chunksCount);

            chunksArray =  [[NSMutableArray array] retain];

            int index = 0;
            //NSRange chunkRange;

            for(int i=1;i<=chunksCount;i++)
            {

                if(i==chunksCount)
                {
                    NSRange chunkRange = {index,contentSize-index};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,contentSize-index);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    [chunksArray addObject:dataChunk];
                    break;
                }
                else
                {
                    NSRange chunkRange = {index, chunkSize};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,chunkSize);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    index += chunkSize;
                    [chunksArray addObject:dataChunk];
                }
            }

            return [chunksArray objectAtIndex:0];

        }
        else
        {
            int chunkNumber = [convertedString intValue];

            if(chunkNumber>0 && (chunkNumber+1)<=chunksCount)
            {
                return [chunksArray objectAtIndex:(chunkNumber)];
            }


        }

        NSLog(@"Releasing Array");
        [chunksArray release];
        chunksCount = 0;
        return [NSData dataWithBase64EncodedString:@"Stop"];
    }];
}

The code on the JavaScript side is:

var socket;
var chunkCount = 0;
var soundBlob, soundUrl;
var smallBlobs = new Array();

function captureMovieCallback(response)
{
    if(socket)
    {
        try{
            socket.send('start');
        }
        catch(e)
        {
            log('Socket is not valid object');
        }

    }
    else
    {
        log('socket is null');
    }
}

function closeSocket(response)
{
    socket.close();
}


function connect(){
    try{
        window.WebSocket = window.WebSocket || window.MozWebSocket;

        socket = new WebSocket('ws://127.0.0.1:9000',
                                      'echo-protocol');

        socket.onopen = function(){
        }

        socket.onmessage = function(e){
            var data = e.data;
            if(e.data instanceof ArrayBuffer)
            {
                log('its arrayBuffer');
            }
            else if(e.data instanceof Blob)
            {
                if(soundBlob)
                   log('its Blob of size = '+ e.data.size + ' final blob size:'+ soundBlob.size);

                if(e.data.size != 3)
                {
                    //log('its Blob of size = '+ e.data.size);
                    smallBlobs[chunkCount]= e.data;
                    chunkCount = chunkCount +1;
                    socket.send(''+chunkCount);
                }
                else
                {
                    //alert('End Received');
                    try{
                    soundBlob = new Blob(smallBlobs,{ "type" : "audio/wav" });
                    var myURL = window.URL || window.webkitURL;
                    soundUrl = myURL.createObjectURL(soundBlob);
                    log('soundURL='+soundUrl);
                    }
                    catch(e)
                    {
                        log('Problem creating blob and url.');
                    }

                    try{
                        var serverUrl = 'http://10.44.45.74:8080/MyTestProject/WebRecording?record';
                        var xhr = new XMLHttpRequest();
                        xhr.open('POST',serverUrl,true);
                        xhr.setRequestHeader("content-type","multipart/form-data");
                        xhr.send(soundBlob);
                    }
                    catch(e)
                    {
                        log('error uploading blob file');
                    }

                    socket.close();
                }

                //alert(JSON.stringify(msg, null, 4));
            }
            else
            {
                log('dont know');
            }
        }

        socket.onclose = function(){
            //message('<p class="event">Socket Status: '+socket.readyState+' (Closed)');
            log('final blob size:'+soundBlob.size);
        }

    } catch(exception){
       log('<p>Error: '+exception);
    }
}

function log(msg) {
    NativeBridge.log(msg);
}
function stopCapture() {
    NativeBridge.call("stopMovie", null,null);
}

function startCapture() {
    NativeBridge.call("captureMovie",null,captureMovieCallback);
}

NativeBridge.js

var NativeBridge = {
  callbacksCount : 1,
  callbacks : {},

  // Automatically called by native layer when a result is available
  resultForCallback : function resultForCallback(callbackId, resultArray) {
    try {


    var callback = NativeBridge.callbacks[callbackId];
    if (!callback) return;
    console.log("calling callback for "+callbackId);
    callback.apply(null,resultArray);
    } catch(e) {alert(e)}
  },

  // Use this in javascript to request native objective-c code
  // functionName : string (I think the name is explicit :p)
  // args : array of arguments
  // callback : function with n-arguments that is going to be called when the native code returned
  call : function call(functionName, args, callback) {

    //alert("call");
    //alert('callback='+callback);
    var hasCallback = callback && typeof callback == "function";
    var callbackId = hasCallback ? NativeBridge.callbacksCount++ : 0;

    if (hasCallback)
      NativeBridge.callbacks[callbackId] = callback;

    var iframe = document.createElement("IFRAME");
    iframe.setAttribute("src", "js-frame:" + functionName + ":" + callbackId+ ":" + encodeURIComponent(JSON.stringify(args)));
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;

  },

    log : function log(message) {

        var iframe = document.createElement("IFRAME");
        iframe.setAttribute("src", "ios-log:"+encodeURIComponent(JSON.stringify("#iOS#" + message)));
        document.documentElement.appendChild(iframe);
        iframe.parentNode.removeChild(iframe);
        iframe = null;

    }

};
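
The native half of this bridge is not shown in the answer: the js-frame: (and ios-log:) iframe loads created by NativeBridge have to be intercepted in the UIWebView delegate and routed to the handleCall function mentioned in the returnResult comments. The sketch below shows what that interception might look like; handleCall:callbackId:args: is a hypothetical dispatcher name, not the author's actual code.

// Sketch: intercept "js-frame:" requests produced by NativeBridge.call.
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request
 navigationType:(UIWebViewNavigationType)navigationType
{
    NSString *url = [[request URL] absoluteString];
    if ([url hasPrefix:@"js-frame:"]) {
        // Format: js-frame:<functionName>:<callbackId>:<urlencoded JSON args>
        NSArray *parts = [url componentsSeparatedByString:@":"];
        NSString *functionName = [parts objectAtIndex:1];
        int callbackId = [[parts objectAtIndex:2] intValue];
        NSString *argsJSON = [[parts objectAtIndex:3]
            stringByReplacingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        // handleCall:callbackId:args: is a hypothetical dispatcher that would
        // invoke captureMovie/stopMovie and later answer via returnResult:args:.
        [self handleCall:functionName callbackId:callbackId args:argsJSON];
        return NO; // swallow the iframe navigation
    }
    return YES;
}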

  1. We call connect() on the JavaScript side on body load in the HTML side.

  2. Once we receive the callback (captureMovieCallback) from the startCapture function, we send the start message indicating that we are ready to accept the data.

  3. The server on the Objective-C side splits the WAV audio data into small chunks of chunkSize = 64*1023 and stores them in an array.

  4. It sends the first block back to the JavaScript side.

  5. JavaScript accepts this block and sends the number of the next block that it needs from the server.

  6. The server sends the block indicated by this number. This process is repeated until we send the last block to JavaScript.

  7. At the end we send the stop message back to the JavaScript side, indicating that we are done. It is 3 bytes in size, which is used as the criterion to break this loop.

  8. Every block is stored as a small blob in an array. Now we create a bigger blob from these small blobs using the following line:

    soundBlob = new Blob(smallBlobs,{ "type" : "audio/wav" });

    This blob is uploaded to the server, which writes it out as a WAV file. We can pass the URL of this WAV file as the src of an audio tag to replay it on the JavaScript side.

  9. We close the WebSocket connection after sending the blob to the server.

    Hope this is clear enough to understand.
