iPhone Watermark on recorded Video


Question

In my application I need to capture a video and put a watermark on that video. The watermark should be text (time and notes). I saw some code using the "QTKit" framework; however, I read that that framework is not available on the iPhone.

Thanks in advance.

Answer

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.
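
For orientation, here is a minimal sketch of how the capture side might be wired up. The queue name, delegate wiring, and method names are illustrative assumptions, not from the original answer; the BGRA pixel format is chosen so the frames match the conversion code shown later.

    #import <AVFoundation/AVFoundation.h>

    // Sketch: this object is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate.
    - (void)setupCaptureSession
    {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];

        NSError *error = nil;
        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (input) {
            [session addInput:input];
        }

        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        // BGRA matches the CGBitmapContextCreate flags used in the conversion code below.
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        dispatch_queue_t queue = dispatch_queue_create("video.frames", DISPATCH_QUEUE_SERIAL);
        [output setSampleBufferDelegate:self queue:queue];
        [session addOutput:output];

        [session startRunning];
    }

    // Each captured frame is delivered here as a CMSampleBufferRef; this is where the
    // frame would be converted, watermarked, and handed to the asset writer.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // convert -> overlay watermark -> append to AVAssetWriter (see below)
    }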

Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of the things I have mentioned. I haven't seen any that give code examples for exactly the effect you want, but you should be able to mix and match pretty easily.

Take a look at these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.

AVCaptureVideoDataOutput will return the frames as CMSampleBufferRefs. Convert them to CGImageRefs using this code:

    // Create a CGImageRef from sample buffer data.
    // The caller owns the returned CGImageRef and must release it with CGImageRelease.
    - (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);            // Lock the image buffer

        // Get information about the image (assumes a non-planar 32BGRA pixel buffer)
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);

        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        /* CVBufferRelease(imageBuffer); */   // do not call this -- the sample buffer owns it!

        return newImage;
    }

From there you would convert to a UIImage,

  UIImage *img = [UIImage imageWithCGImage:yourCGImage];  

then use

[img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame into a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
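
For the overlay step, a helper along these lines might work. The "watermark.png" asset name and the helper itself are hypothetical (not from the original answer), and it assumes UIKit plus the iOS 7+ NSString drawing API:

    // Sketch: draws the captured frame, then a hypothetical watermark.png and a
    // timestamp string on top, and returns the composited UIImage.
    - (UIImage *)watermarkedImageFromFrame:(UIImage *)frame
    {
        UIImage *watermark = [UIImage imageNamed:@"watermark.png"];   // hypothetical asset
        NSString *timestamp = [NSDateFormatter localizedStringFromDate:[NSDate date]
                                                             dateStyle:NSDateFormatterShortStyle
                                                             timeStyle:NSDateFormatterMediumStyle];

        UIGraphicsBeginImageContextWithOptions(frame.size, YES, 1.0);

        [frame drawInRect:CGRectMake(0, 0, frame.size.width, frame.size.height)];
        [watermark drawInRect:CGRectMake(10, 10, watermark.size.width, watermark.size.height)];
        [timestamp drawAtPoint:CGPointMake(10, frame.size.height - 30)
                withAttributes:@{ NSFontAttributeName            : [UIFont systemFontOfSize:18],
                                  NSForegroundColorAttributeName : [UIColor whiteColor] }];

        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }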

How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
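
If you do write the frames in real time rather than batching UIImages, the writing side might be sketched roughly as follows. The assetWriter, writerInput, and pixelBufferAdaptor property names are placeholders of my own, and rendering the watermarked image into a CVPixelBuffer is left out:

    // Sketch: one-time writer setup; the properties used here are assumed to be
    // declared elsewhere under these hypothetical names.
    - (void)setUpWriterWithURL:(NSURL *)outputURL size:(CGSize)size
    {
        NSError *error = nil;
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];

        NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                    AVVideoWidthKey  : @(size.width),
                                    AVVideoHeightKey : @(size.height) };
        self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                              outputSettings:settings];
        self.writerInput.expectsMediaDataInRealTime = YES;    // frames arrive live from the camera

        self.pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput
                                       sourcePixelBufferAttributes:nil];

        [self.assetWriter addInput:self.writerInput];
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
    }

    // Sketch: append one watermarked frame. The pixel buffer would be rendered from the
    // composited image; the presentation time comes from the sample buffer's timestamp.
    - (void)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer atTime:(CMTime)presentationTime
    {
        if (self.writerInput.readyForMoreMediaData) {
            [self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        }
    }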

This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30 fps is a great way to crash the app.

