Should I use NSOperation or NSRunLoop?


Question

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:

  • change some camera parameters on the fly (gain, gamma, etc.)
  • tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)

Using the button features, I have been unable to loop the video frame monitor, while still looking for a button press (much like using the keypressed feature from C.) Two options present themselves:

  1. Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
  2. Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?

The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object in Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autoreleasepool that actually responds to the RunLoop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...

Because Objective C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ... Thanks in advance

Solution

I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon Quicktime functions, but I found libdc1394 to be a little easier to understand.

For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.

The CVDisplayLink is configured using the following code:

CGDirectDisplayID   displayID = CGMainDisplayID();  
CVReturn            error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);  

and it calls the following function to trigger the retrieval of a new camera frame:

static CVReturn renderCallback(CVDisplayLinkRef displayLink, 
                               const CVTimeStamp *inNow, 
                               const CVTimeStamp *inOutputTime, 
                               CVOptionFlags flagsIn, 
                               CVOptionFlags *flagsOut, 
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
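
A rough sketch of what a renderTime: method like that might look like with libdc1394 (the dc1394_capture_dequeue / dc1394_capture_enqueue calls are real libdc1394 v2 API; the camera instance variable and the displayFrame: method are assumed names for illustration):

- (CVReturn)renderTime:(const CVTimeStamp *)outputTime
{
    dc1394video_frame_t *frame = NULL;

    // Poll for the newest frame without blocking the display link thread.
    if ((dc1394_capture_dequeue(camera, DC1394_CAPTURE_POLICY_POLL, &frame) == DC1394_SUCCESS) &&
        (frame != NULL))
    {
        [self displayFrame:frame];              // assumed: upload the frame and redraw
        dc1394_capture_enqueue(camera, frame);  // hand the buffer back to the capture ring
    }
    return kCVReturnSuccess;
}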

The CVDisplayLink is started and stopped using the following:

- (void)startRequestingFrames;
{
    CVDisplayLinkStart(displayLink);    
}

- (void)stopRequestingFrames;
{
    CVDisplayLinkStop(displayLink);
}
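
Since starting and stopping are just ordinary methods, connecting them to the Interface Builder buttons from the question is a matter of plain IBActions on your controller. A minimal sketch, with videoView and writeCurrentFrameToDisk as hypothetical names:

- (IBAction)startMonitoring:(id)sender
{
    [videoView startRequestingFrames];
}

- (IBAction)saveImage:(id)sender
{
    [videoView stopRequestingFrames];  // monitoring halts between frames
    [self writeCurrentFrameToDisk];    // hypothetical save routine
}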

Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc. I change corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
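
A rough sketch of that handoff (dc1394_feature_set_value and the DC1394_FEATURE_* constants are real libdc1394 v2 API; the flag names and instance variables are made up for this sketch):

#import <libkern/OSAtomic.h>
#import <dc1394/dc1394.h>

// Bits in the flag variable, one per adjustable setting.
enum {
    kGainSettingDirty     = 1 << 0,
    kExposureSettingDirty = 1 << 1,
};

// Assumed instance variables on the controller:
//   volatile uint32_t settingsDirtyFlags;
//   uint32_t pendingGain, pendingExposure;
//   dc1394camera_t *camera;

// Main thread: record the new value, then raise the matching dirty bit.
- (void)setGainValue:(uint32_t)newGain
{
    pendingGain = newGain;
    OSAtomicOr32Barrier(kGainSettingDirty, &settingsDirtyFlags);
}

// Display link thread: called before grabbing the next frame.
- (void)applyPendingSettings
{
    if (settingsDirtyFlags & kGainSettingDirty)
    {
        dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, pendingGain);
        OSAtomicAnd32Barrier((uint32_t)~kGainSettingDirty, &settingsDirtyFlags);
    }
    if (settingsDirtyFlags & kExposureSettingDirty)
    {
        dc1394_feature_set_value(camera, DC1394_FEATURE_EXPOSURE, pendingExposure);
        OSAtomicAnd32Barrier((uint32_t)~kExposureSettingDirty, &settingsDirtyFlags);
    }
}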

Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
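
The extensions usually meant here are GL_APPLE_client_storage and GL_APPLE_texture_range. A sketch of a frame upload using them, with cameraTexture, frameBuffer, frameBytes, frameWidth, and frameHeight as placeholder names:

#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

- (void)uploadFrameTexture
{
    glEnable(GL_TEXTURE_RECTANGLE_ARB);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, cameraTexture);

    // Tell the driver it may DMA directly out of our buffer instead of copying.
    glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_ARB, frameBytes, frameBuffer);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE,
                    GL_STORAGE_SHARED_APPLE);
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

    // BGRA with UNSIGNED_INT_8_8_8_8_REV is the fast, no-swizzle path on the Mac.
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, frameWidth, frameHeight,
                 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);
}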

Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
