iPhone Watermark on recorded Video

Problem description

In my application I need to capture a video and put a watermark on that video. The watermark should be text (time and notes). I saw some code using the QTKit framework, but I have read that that framework is not available for iPhone.

Thanks in advance.
Recommended answer

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.
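The capture side of that pipeline can be sketched roughly as follows. This is a minimal, illustrative setup, not code from the original answer: the class name, queue label, and the omitted error/`canAddInput:` checks are assumptions, and the BGRA pixel format is chosen to match the bitmap-context conversion shown later.

```objc
#import <AVFoundation/AVFoundation.h>

// Illustrative class name; delegate callbacks arrive on the serial queue below.
@interface FrameGrabber : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation FrameGrabber

- (void)start {
    self.session = [[AVCaptureSession alloc] init];

    // Attach the default camera as input (error handling omitted for brevity).
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [self.session addInput:input];

    // Ask for BGRA frames so the CGBitmapContext conversion is straightforward.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("video.frames", DISPATCH_QUEUE_SERIAL)];
    [self.session addOutput:output];

    [self.session startRunning];
}

// Called once per captured frame; watermark the frame here and hand it to AVAssetWriter.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // process sampleBuffer
}

@end
```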
Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of the things I have mentioned. I haven't seen any that give code examples for exactly the effect you want, but you should be able to mix and match pretty easily.
EDIT:

Check out these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.
AVCaptureVideoDataOutput will return images as CMSampleBufferRefs. Convert them to CGImageRefs using this code:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image (assumes a non-planar BGRA pixel buffer)
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); // the caller is responsible for calling CGImageRelease() on this
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this! the sample buffer owns it
    return newImage;
}
From there you would convert to a UIImage:

UIImage *img = [UIImage imageWithCGImage:yourCGImage];

Then use

[img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame to a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
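The overlay step could look something like this hedged sketch: draw the captured frame into an image context, draw the watermark PNG over it, and stamp the time-and-notes text on top. The function name, corner offsets, and font are illustrative assumptions; `-drawAtPoint:withAttributes:` requires iOS 7 or later.

```objc
#import <UIKit/UIKit.h>

// Illustrative helper, not from the original answer.
UIImage *WatermarkFrame(UIImage *frame, UIImage *watermarkPNG, NSString *note) {
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, frame.scale);

    // 1. The captured frame as the background.
    [frame drawInRect:CGRectMake(0, 0, frame.size.width, frame.size.height)];

    // 2. The watermark PNG in the bottom-right corner, inset by 10 points.
    [watermarkPNG drawInRect:CGRectMake(frame.size.width  - watermarkPNG.size.width  - 10,
                                        frame.size.height - watermarkPNG.size.height - 10,
                                        watermarkPNG.size.width,
                                        watermarkPNG.size.height)];

    // 3. The time-and-notes text in the top-left corner.
    NSString *stamp = [NSString stringWithFormat:@"%@  %@", [NSDate date], note];
    [stamp drawAtPoint:CGPointMake(10, 10)
        withAttributes:@{ NSForegroundColorAttributeName : [UIColor whiteColor],
                          NSFontAttributeName : [UIFont systemFontOfSize:14] }];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```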
How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
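The writing side with AVAssetWriter can be sketched roughly as below. The output dimensions, file type, and the `outputURL` / `pixelBuffer` / `frameTime` placeholders are assumptions for illustration; error handling is omitted.

```objc
#import <AVFoundation/AVFoundation.h>

// Illustrative setup; outputURL is a file URL you provide.
static AVAssetWriterInputPixelBufferAdaptor *SetUpWriter(NSURL *outputURL, AVAssetWriter **outWriter) {
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:nil];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @640,   // assumed frame size
                                AVVideoHeightKey : @480 };
    AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                   outputSettings:settings];
    input.expectsMediaDataInRealTime = YES; // we are appending live capture frames

    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    *outWriter = writer;
    return adaptor;
}

// Per watermarked frame (pixelBuffer, frameTime are placeholders):
//   if (adaptor.assetWriterInput.readyForMoreMediaData)
//       [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
//
// When finished:
//   [adaptor.assetWriterInput markAsFinished];
//   [writer finishWritingWithCompletionHandler:^{ /* file saved */ }];
```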
This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30 fps is a great way to crash the app.