Python Get Screen Pixel Value in OS X

Question

I'm in the process of building an automated game bot in Python on OS X 10.8.2 and in the process of researching Python GUI automation I discovered autopy. The mouse manipulation API is great, but it seems that the screen capture methods rely on deprecated OpenGL methods...

Are there any efficient ways of getting the color value of a pixel in OS X? The only way I can think of now is to use os.system("screencapture foo.png") but the process seems to have unneeded overhead as I'll be polling very quickly.
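
A minimal sketch of the approach described above, assuming Pillow is available to read the saved image back (the helper name and file path are illustrative, not from the question):

import os
from PIL import Image  # Pillow, assumed installed

def pixel_via_screencapture(x, y, path="/tmp/foo.png"):
    # Shell out to the built-in screencapture tool, then read the
    # saved PNG back and return the value at (x, y).
    os.system("screencapture %s" % path)
    return Image.open(path).getpixel((x, y))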

Answer

A small improvement, but using the TIFF compression option for screencapture is a bit quicker:

$ time screencapture -t png /tmp/test.png
real        0m0.235s
user        0m0.191s
sys         0m0.016s
$ time screencapture -t tiff /tmp/test.tiff
real        0m0.079s
user        0m0.028s
sys         0m0.026s
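
A minimal sketch of using that option from Python (subprocess, Pillow and the temporary path are illustrative assumptions; the answer itself only shows the shell timings):

import subprocess
from PIL import Image  # Pillow, assumed installed

def grab_pixel_tiff(x, y, path="/tmp/shot.tiff"):
    # Same screencapture idea as in the question, but with the faster
    # TIFF output format (-t tiff) measured above.
    subprocess.call(["screencapture", "-t", "tiff", path])
    return Image.open(path).getpixel((x, y))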

This does have a lot of overhead, as you say (the subprocess creation, writing/reading from disc, compressing/decompressing).

Instead, you could use PyObjC to capture the screen using CGWindowListCreateImage. I found it took about 70ms (~14fps) to capture a 1680x1050 pixel screen, and the pixel values are then accessible in memory.

Some random notes:

  • Importing the Quartz.CoreGraphics module is the slowest part, about 1 second. The same is true for importing most of the PyObjC modules. This is unlikely to matter in this case, but for short-lived processes you might be better off writing the tool in ObjC.
  • Specifying a smaller region is a bit quicker, but not hugely (~40ms for a 100x100px block, ~70ms for 1680x1050). Most of the time seems to be spent in the CGDataProviderCopyData call itself - I wonder if there's a way to access the data directly, since we don't need to modify it?
  • The ScreenPixel.pixel function is pretty quick, but accessing large numbers of pixels is still slow (since 0.01ms * 1680*1050 is about 17 seconds) - if you need to access lots of pixels, it is probably quicker to struct.unpack_from them all in one go (see the sketch after these notes).
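
To illustrate that last note, a minimal sketch of unpacking every pixel in a single struct call (all_pixels is a hypothetical helper, not part of the class below; like ScreenPixel.pixel, it assumes the row stride is exactly width * 4 bytes):

import struct

def all_pixels(sp):
    # sp is a ScreenPixel instance that has already called capture().
    # Each pixel is stored as 4 unsigned bytes in BGRA order.
    fmt = "%dB" % (sp.width * sp.height * 4)
    flat = struct.unpack_from(fmt, sp._data)
    # Regroup the flat BGRA bytes into row-major (r, g, b, a) tuples.
    return [(flat[i + 2], flat[i + 1], flat[i], flat[i + 3])
            for i in range(0, len(flat), 4)]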

The code is as follows:

import time
import struct

import Quartz.CoreGraphics as CG


class ScreenPixel(object):
    """Captures the screen using CoreGraphics, and provides access to
    the pixel values.
    """

    def capture(self, region = None):
        """region should be a CGRect, something like:

        >>> import Quartz.CoreGraphics as CG
        >>> region = CG.CGRectMake(0, 0, 100, 100)
        >>> sp = ScreenPixel()
        >>> sp.capture(region=region)

        The default region is CG.CGRectInfinite (captures the full screen)
        """

        if region is None:
            region = CG.CGRectInfinite
        else:
            # TODO: Odd widths cause the image to warp. This is likely
            # caused by the offset calculation in ScreenPixel.pixel, and
            # could be modified to allow odd widths
            if region.size.width % 2 > 0:
                emsg = "Capture region width should be even (was %s)" % (
                    region.size.width)
                raise ValueError(emsg)

        # Create screenshot as CGImage
        image = CG.CGWindowListCreateImage(
            region,
            CG.kCGWindowListOptionOnScreenOnly,
            CG.kCGNullWindowID,
            CG.kCGWindowImageDefault)

        # Intermediate step, get pixel data as CGDataProvider
        prov = CG.CGImageGetDataProvider(image)

        # Copy data out of CGDataProvider, becomes string of bytes
        self._data = CG.CGDataProviderCopyData(prov)

        # Get width/height of image
        self.width = CG.CGImageGetWidth(image)
        self.height = CG.CGImageGetHeight(image)

    def pixel(self, x, y):
        """Get pixel value at given (x,y) screen coordinates

        Must call capture first.
        """

        # Pixel data is unsigned char (8-bit unsigned integer),
        # and there are four per pixel (blue, green, red, alpha)
        data_format = "BBBB"

        # Calculate offset, based on
        # http://www.markj.net/iphone-uiimage-pixel-color/
        offset = 4 * ((self.width*int(round(y))) + int(round(x)))

        # Unpack data from string into Python'y integers
        b, g, r, a = struct.unpack_from(data_format, self._data, offset=offset)

        # Return BGRA as RGBA
        return (r, g, b, a)


if __name__ == '__main__':
    # Timer helper-function
    import contextlib

    @contextlib.contextmanager
    def timer(msg):
        start = time.time()
        yield
        end = time.time()
        print "%s: %.02fms" % (msg, (end-start)*1000)


    # Example usage
    sp = ScreenPixel()

    with timer("Capture"):
        # Take screenshot (takes about 70ms for me)
        sp.capture()

    with timer("Query"):
        # Get pixel value (takes about 0.01ms)
        print sp.width, sp.height
        print sp.pixel(0, 0)


    # To verify screen-cap code is correct, save all pixels to PNG,
    # using http://the.taoofmac.com/space/projects/PNGCanvas

    from pngcanvas import PNGCanvas
    c = PNGCanvas(sp.width, sp.height)
    for x in range(sp.width):
        for y in range(sp.height):
            c.point(x, y, color = sp.pixel(x, y))

    with open("test.png", "wb") as f:
        f.write(c.dump())
