iOS drawing screen video capture not smooth

Problem Description

I am creating an application in which we can draw with a finger in an imageView while also recording the screen. I have implemented these features so far, but the problem is that once the video recording is completed, the finger drawing is not smooth when we play back the recorded video.

I am not using OpenGL; the drawing is done on a UIImageView, and every 0.01 seconds we capture the image from the UIImageView and append the pixel buffer to the AVAssetWriterInputPixelBufferAdaptor object.
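
The snapshot step looks roughly like this (a sketch; snapshotOfImageView and drawingImageView are placeholder names for illustration, not my exact code):

- (UIImage *)snapshotOfImageView {
    // Render the image view's layer into a bitmap context and grab it
    // as a UIImage; its CGImage is then converted into a pixel buffer.
    UIGraphicsBeginImageContextWithOptions(self.drawingImageView.bounds.size, NO, 0.0);
    [self.drawingImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}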

Here is the code I use to convert the UIImage into a pixel buffer:

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CGSize frameSize = CGSizeMake(976, 667);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    // The caller owns the buffer returned by CVPixelBufferCreate() and
    // must eventually release it.
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width, frameSize.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    // Draw the CGImage into the buffer's backing memory. Use the buffer's
    // actual bytes-per-row, since CoreVideo may pad each row.
    CGColorSpaceRef rgbColorSpace = CGImageGetColorSpace(image);
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width, frameSize.height,
                                                 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);

    CGContextDrawImage(context,
                       CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                       image);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

The method below is called at a 0.01-second time interval:

CVPixelBufferRef pixelBufferX = [self pixelBufferFromCGImage:theIM];
bValue = [self.avAdaptor appendPixelBuffer:pixelBufferX withPresentationTime:presentTime];

Can anyone guide me on how to improve the video capture?

Thanks in advance.

Recommended Answer

You shouldn't display things by calling them every 0.01 seconds. If you want to stay in sync with video, see AVSynchronizedLayer, which is explicitly for this. Alternately, see CADisplayLink, which is for staying in sync with screen refreshes. 0.01 seconds doesn't line up with anything in particular, and you're probably getting beats where you're out of sync with the video and with the display. In any case, you should be doing your drawing in some callback from your player, not with a timer.
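
A minimal sketch of the display-link approach (startDisplayLink and captureFrame: are placeholder method names, and the CMTime conversion shown is one reasonable choice, not the only one):

#import <QuartzCore/QuartzCore.h>
#import <CoreMedia/CoreMedia.h>

// Sketch: drive capture from the display refresh instead of a 0.01 s timer.
- (void)startDisplayLink {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(captureFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)captureFrame:(CADisplayLink *)link {
    // link.timestamp is the host time of the last displayed frame; derive
    // the presentation time from it instead of counting timer ticks.
    CMTime presentTime = CMTimeMakeWithSeconds(link.timestamp, 600);
    // ... snapshot the image view, build the pixel buffer, and append it
    // with appendPixelBuffer:withPresentationTime: here ...
}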

You are also leaking your pixel buffer in every loop. Since you called CVPixelBufferCreate(), you're responsible for eventually calling CFRelease() on the resulting pixel buffer. I would expect your program to eventually crash by running out of memory if it runs for a while.
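
Concretely, using the names from the snippet above, the append step should release the buffer it created (a sketch):

CVPixelBufferRef pixelBufferX = [self pixelBufferFromCGImage:theIM];
if (pixelBufferX) {
    bValue = [self.avAdaptor appendPixelBuffer:pixelBufferX
                          withPresentationTime:presentTime];
    // Balance the CVPixelBufferCreate() inside pixelBufferFromCGImage:.
    CVPixelBufferRelease(pixelBufferX);
}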

Make sure you've studied the AV Foundation Programming Guide so you know how all the pieces fit together in media playback.
