How to Draw an Image in an NSOpenGLView with Swift?
Question
Basically, I want to create an image view that uses OpenGL for rendering. My eventual plan is to use this as the base for a video player with CIFilters.
I followed a tutorial that emphasized using OpenGL to take advantage of the GPU. The tutorial was for iOS; I mapped it to Cocoa.
I have no idea where I am failing, but all I get is a blank screen.
Here is the view:
import Cocoa
import OpenGL.GL3

class CoreImageView: NSOpenGLView {
    var coreImageContext: CIContext?
    var image: CIImage? {
        didSet {
            display()
        }
    }

    override init?(frame frameRect: NSRect, pixelFormat format: NSOpenGLPixelFormat?) {
        //Bad programming - Code duplication
        let attrs: [NSOpenGLPixelFormatAttribute] = [
            UInt32(NSOpenGLPFAAccelerated),
            UInt32(NSOpenGLPFAColorSize), UInt32(32),
            UInt32(NSOpenGLPFAOpenGLProfile),
            UInt32(NSOpenGLProfileVersion3_2Core),
            UInt32(0)
        ]
        let pf = NSOpenGLPixelFormat(attributes: attrs)
        super.init(frame: frameRect, pixelFormat: pf)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        initialize()
    }

    //Bad programming - Code duplication
    func defaultPixelFormat() -> NSOpenGLPixelFormat? {
        let attrs: [NSOpenGLPixelFormatAttribute] = [
            UInt32(NSOpenGLPFAAccelerated),
            UInt32(NSOpenGLPFAColorSize), UInt32(32),
            UInt32(NSOpenGLPFAOpenGLProfile),
            UInt32(NSOpenGLProfileVersion3_2Core),
            UInt32(0)
        ]
        return NSOpenGLPixelFormat(attributes: attrs)
    }

    func initialize() {
        guard let pf = defaultPixelFormat() else {
            Swift.print("pixelFormat could not be constructed")
            return
        }
        self.pixelFormat = pf
        guard let context = NSOpenGLContext(format: pf, share: nil) else {
            Swift.print("context could not be constructed")
            return
        }
        self.openGLContext = context
        if let cglContext = context.cglContextObj {
            coreImageContext = CIContext(cglContext: cglContext, pixelFormat: pixelFormat?.cglPixelFormatObj, colorSpace: nil, options: nil)
        } else {
            Swift.print("cglContext could not be constructed")
            coreImageContext = CIContext(options: nil)
        }
    }

    //--------------------------
    override func draw(_ dirtyRect: NSRect) {
        if let img = image {
            let scale = self.window?.screen?.backingScaleFactor ?? 1.0
            let destRect = bounds.applying(CGAffineTransform(scaleX: scale, y: scale))
            coreImageContext?.draw(img, in: destRect, from: img.extent)
        }
    }
}
Any help is appreciated. The complete project is here (Xcode 8) and here (Xcode 7).
Answer
I might suggest checking out Simon's Core Image helpers -- he has a project on his GitHub that basically tells Core Image to render via the GPU using an OpenGL ES 2.0 context. It was really helpful for me when I was trying to figure out how to render via the GPU -- it's a really good idea not to transfer to the CPU for rendering, because that transfer takes a (relatively) long time.
https://github.com/FlexMonkey/CoreImageHelpers
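One detail worth double-checking in the question's draw(_:): CIContext's draw(_:in:from:) expects the destination rectangle in pixels, while the view's bounds are in points, so the backing-scale multiplication is needed. A minimal, self-contained sketch of that conversion (the helper name backingRect is my own; the math mirrors bounds.applying(CGAffineTransform(scaleX:y:)) in the question):

```swift
import Foundation

// Convert a rect given in view points to backing-store pixels.
// Hypothetical helper: CIContext.draw(_:in:from:) wants pixel units,
// so the view's bounds must be multiplied by the backing scale factor.
func backingRect(for bounds: CGRect, scale: CGFloat) -> CGRect {
    return CGRect(x: bounds.origin.x * scale,
                  y: bounds.origin.y * scale,
                  width: bounds.size.width * scale,
                  height: bounds.size.height * scale)
}

// A 400x300-point view on a Retina (2x) screen covers 800x600 pixels.
let pixels = backingRect(for: CGRect(x: 0, y: 0, width: 400, height: 300),
                         scale: 2.0)
print(pixels.size.width, pixels.size.height)  // 800.0 600.0
```

The other common pitfall with NSOpenGLView subclasses is drawing without making the GL context current (openGLContext?.makeCurrentContext()) and flushing it afterwards; if either step is missing, nothing reaches the screen.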