Replace Part of Pixel Buffer with White Pixels in iOS


Problem description

I am using the iPhone camera to capture live video and feeding the pixel buffer to a network that does some object recognition. Here is the relevant code: (I won't post the code for setting up the AVCaptureSession etc. as this is pretty standard.)

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Work out whether the channel order needs to be reversed (ARGB) or not (BGRA).
    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    int doReverseChannels;
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat ) {
        doReverseChannels = 1;
    } else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat ) {
        doReverseChannels = 0;
    } else {
        assert(false); // unsupported pixel format
    }

    const int sourceRowBytes = (int)CVPixelBufferGetBytesPerRow( pixelBuffer );
    const int width = (int)CVPixelBufferGetWidth( pixelBuffer );
    const int fullHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    unsigned char* sourceBaseAddr = (unsigned char*)CVPixelBufferGetBaseAddress( pixelBuffer );

    // Crop the frame to a centered square of side `width` by skipping the top
    // margin; the pointer arithmetic below simply steps over whole rows.
    int height;
    unsigned char* sourceStartAddr;
    if (fullHeight <= width) {
        height = fullHeight;
        sourceStartAddr = sourceBaseAddr;
    } else {
        height = width;
        const int marginY = ((fullHeight - width) / 2);
        sourceStartAddr = (sourceBaseAddr + (marginY * sourceRowBytes));
    }
}

The network then takes sourceStartAddr, width, height, sourceRowBytes & doReverseChannels as inputs.
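
To make that hand-off concrete, here is a minimal sketch (not code from the question) of how a consumer could walk the cropped square using exactly these five values; the final hand-off line is a hypothetical placeholder for whatever the recognition network actually expects.

static void FeedCroppedRegion(unsigned char *sourceStartAddr,
                              int width,
                              int height,
                              int sourceRowBytes,
                              int doReverseChannels) {
    for (int y = 0; y < height; y++) {
        // Step by sourceRowBytes, not width * 4, because rows may be padded.
        unsigned char *row = sourceStartAddr + y * sourceRowBytes;
        for (int x = 0; x < width; x++) {
            unsigned char *p = row + x * 4;
            unsigned char r, g, b;
            if (doReverseChannels) {   // kCVPixelFormatType_32ARGB: A R G B in memory
                r = p[1]; g = p[2]; b = p[3];
            } else {                   // kCVPixelFormatType_32BGRA: B G R A in memory
                b = p[0]; g = p[1]; r = p[2];
            }
            (void)r; (void)g; (void)b; // hand the RGB triple to the network here
        }
    }
}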

My question is the following: What would be the simplest and/or most efficient way to replace or delete a part of the image data with all-white pixels? Is it possible to directly overwrite a portion of the pixel buffer data, and if so, how?

I only have a very rudimentary understanding of how this pixel buffer works, so I apologize if I'm missing something very basic here. The most closely related question I found on Stack Overflow was this one, where an EAGLContext is used to add text to a video frame. While that would work for my objective, which only needs this replacement for single images, I assume it would kill performance if applied to every video frame, and I would like to find out if there is another method. Any help here would be appreciated.

Answer

Here is an easy way to manipulate a CVPixelBufferRef without using other libraries like Core Graphics or OpenGL:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    const int kBytesPerPixel = 4;
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    int bufferWidth = (int)CVPixelBufferGetWidth( pixelBuffer );
    int bufferHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress( pixelBuffer );

    for ( int row = 0; row < bufferHeight; row++ )
    {
        uint8_t *pixel = baseAddress + row * bytesPerRow;
        for ( int column = 0; column < bufferWidth; column++ )
        {
            if ((row < 100) && (column < 100)) {
                pixel[0] = 255; // BGRA, Blue value
                pixel[1] = 255; // Green value
                pixel[2] = 255; // Red value
            }
            pixel += kBytesPerPixel;
        }
    }

    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );

    // Do whatever needs to be done with the pixel buffer
}

This overwrites the top left patch of 100 x 100 pixels in the image with white pixels.
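
The original answer stops here, but if you need to clear an arbitrary rectangle on every frame, a slightly faster variant is to fill each affected row with a single memset instead of branching per pixel. The sketch below is my own addition, not part of the answer; it assumes a 4-byte-per-pixel format (BGRA or ARGB) and that forcing the alpha byte to 255 along with the colour bytes is acceptable for your pipeline. The function name FillRectWithWhite is just illustrative.

#include <string.h>
#import <CoreVideo/CoreVideo.h>

static void FillRectWithWhite(CVPixelBufferRef pixelBuffer,
                              int rectX, int rectY,
                              int rectWidth, int rectHeight) {
    const int kBytesPerPixel = 4;
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );

    int bufferWidth = (int)CVPixelBufferGetWidth( pixelBuffer );
    int bufferHeight = (int)CVPixelBufferGetHeight( pixelBuffer );
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress( pixelBuffer );

    // Clamp the requested rectangle to the buffer bounds.
    int startX = rectX < 0 ? 0 : rectX;
    int startY = rectY < 0 ? 0 : rectY;
    int endX = rectX + rectWidth  > bufferWidth  ? bufferWidth  : rectX + rectWidth;
    int endY = rectY + rectHeight > bufferHeight ? bufferHeight : rectY + rectHeight;

    if (startX < endX) {
        for (int row = startY; row < endY; row++) {
            uint8_t *rowStart = baseAddress + row * bytesPerRow + startX * kBytesPerPixel;
            // 0xFF in every byte gives white (and alpha = 255) in 8-bit-per-channel formats.
            memset(rowStart, 0xFF, (size_t)(endX - startX) * kBytesPerPixel);
        }
    }

    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
}

Calling FillRectWithWhite(pixelBuffer, 0, 0, 100, 100) from the capture delegate reproduces the same top-left patch as the loop above.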

I found this way of manipulating the pixel buffer in Apple's sample project called RosyWriter.

Kind of amazed I didn't get any answers here considering how easy this turned out to be. Hope this helps someone.
