Controlling the pan (to anchor a point) when zooming into an image


Question

I'm writing a simple image viewer and am implementing a pan and zoom feature (using mouse dragging and mouse wheel scrolling respectively). I've successfully implemented the pan (easy mode) and a naive 'into top left corner' zoom.
I'd now like to refine the zoom so that the coordinates of the user's mouse become the 'focal point' when zooming: that is, when zooming, the pan is updated so that the pixel (of the image) under the user's mouse stays put (so that they're really zooming into that area).

The image is viewed by overriding the paintEvent on an otherwise plain QWidget.
Try as I might with intuitive approaches, I cannot seem to achieve the correct zoom behaviour.

An attribute scale represents the current level of zoom (a scale of 2 means the image is viewed at double its true size, 0.5 at half, and scale > 0), and position is the coordinate of the top-left corner of the image region currently viewed (via panning).

Here's how the actual image display is performed:

def paintEvent(self, event):
    painter = QtGui.QPainter()
    painter.begin(self)

    painter.drawImage(0, 0,
        self.image.scaled(
            self.image.width() * self.scale,
            self.image.height() * self.scale,
            QtCore.Qt.KeepAspectRatio),
        self.position[0], self.position[1])

    painter.end()
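
For reference, as far as I can tell the drawImage overload used above takes a target position, the image, and a source offset, so the mapping this implies between (unscaled) image coordinates and widget coordinates is roughly the following (the helper names here are mine, just to spell things out, not part of the viewer):

def image_to_widget(ix, iy, scale, position):
    """Where an (unscaled) image pixel lands in widget coordinates."""
    return (ix * scale - position[0], iy * scale - position[1])

def widget_to_image(wx, wy, scale, position):
    """Which (unscaled) image pixel sits under a widget point."""
    return ((wx + position[0]) / scale, (wy + position[1]) / scale)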

Here is the panning code (relatively simple):
(pressed and anchor are used entirely for panning, and refer to the position of the initial mouse press and image view position at that time (respectively))

def mousePressEvent(self, event):
    self.pressed = event.pos()
    self.anchor = self.position

def mouseReleaseEvent(self, event):
    self.pressed = None

def mouseMoveEvent(self, event):
    if (self.pressed):
        dx, dy = event.x() - self.pressed.x(), event.y() - self.pressed.y()
        self.position = (self.anchor[0] - dx, self.anchor[1] - dy)
    self.repaint()

Here is the zooming code without attempting to adjust the pan. It results in everything shrinking or growing from/to the top-left corner of the screen:

def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if (self.scale < 0.1):
        self.scale = oldscale
    self.repaint()

Here is the zooming code with panning to preserve (anchor) the top-left corner of the visible region. When you zoom in, the top-left pixel on the screen will not change:

def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if (self.scale < 0.1):
        self.scale = oldscale

    self.position = (self.position[0] * (self.scale / oldscale),
                     self.position[1] * (self.scale / oldscale))        
    self.repaint()
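
Why this anchors the top-left corner, as far as I can tell: the (unscaled) image pixel shown at widget (0, 0) is position / scale, and scaling the pan by the same ratio as the zoom leaves that quotient unchanged. A throwaway check with made-up numbers:

# Made-up numbers, only to sanity-check the top-left anchoring above.
oldscale, newscale = 1.0, 1.5
oldposition = (50.0, 20.0)

# Scale the pan by the same ratio as the zoom, as the handler above does.
newposition = (oldposition[0] * (newscale / oldscale),
               oldposition[1] * (newscale / oldscale))

# The (unscaled) image pixel at widget (0, 0) is position / scale in both cases.
before = (oldposition[0] / oldscale, oldposition[1] / oldscale)  # (50.0, 20.0)
after = (newposition[0] / newscale, newposition[1] / newscale)   # (50.0, 20.0)
assert before == after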

I want the above effect, but with the anchored point at the user's mouse when scrolling. Here is my attempt, which only partly works: the zooming is still not what I intended, but it scrolls into the general region of the mouse, without anchoring. In fact, keeping the mouse in the same position and zooming in seems to follow a curved path, panning right and then panning left.

def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if (self.scale < 0.1):
        self.scale = oldscale

    oldpoint = self.mapFromGlobal(QtGui.QCursor.pos())
    dx, dy = oldpoint.x() - self.position[0], oldpoint.y() - self.position[1]
    newpoint = (oldpoint.x() * (self.scale/oldscale),
                oldpoint.y() * (self.scale/oldscale))
    self.position = (newpoint[0] - dx, newpoint[1] - dy)

The theory behind this is that before the zoom, the pixel 'under' the mouse is length dx and dy from the top-left corner (position). After the zoom, we calculate the new position of this pixel and force it under the same coordinate on the screen by adjusting our self.position to be dx and dy west and north of the pixel.

I'm not entirely sure where I'm going wrong: I suspect that the mapping of old point into my screen coordinates is somehow off, or more likely: my mathematics is wrong because I've confused pixel and screen coordinates.
I've tried a few intuitive variations and nothing comes close to the intended anchoring.

I imagine this is quite a common task for file viewers (since most seem to zoom like this), yet I'm finding it quite difficult to research the algorithms.

Here's the full code (requires PyQt4) to tinker with the zooms:
http://pastebin.com/vvpdZy9g

Any help is appreciated!

Solution

OK, I managed to get it working:

def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if (self.scale < 0.1):
        self.scale = oldscale

    # Cursor position in widget coordinates; this is also the offset (dx, dy)
    # that the anchored pixel should keep from the top-left corner of the view.
    screenpoint = self.mapFromGlobal(QtGui.QCursor.pos())
    dx, dy = screenpoint.x(), screenpoint.y()
    # Point under the cursor in scaled-image coordinates, before the zoom.
    oldpoint = (screenpoint.x() + self.position[0], screenpoint.y() + self.position[1])
    # The same point after rescaling.
    newpoint = (oldpoint[0] * (self.scale/oldscale),
                oldpoint[1] * (self.scale/oldscale))
    # Pan so that this point stays (dx, dy) from the top-left corner, i.e. under the cursor.
    self.position = (newpoint[0] - dx, newpoint[1] - dy)

The logic here (a quick numeric check follows the list):

  • we get the mouse's position on the screen (screenpoint); by definition, this is also the distance between our anchored pixel and the top/left edges of the screen
  • we use screenpoint and position to find the coordinate of the mouse in terms of the image's plane (i.e. the 2D index of the hovered pixel), as oldpoint
  • applying our scaling, we calculate the new 2D index of that pixel (newpoint)
  • we want this pixel on our screen, but not in the top left: we want it dx and dy from the top left (position)
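
A quick numeric check of the above (the numbers are made up): suppose scale goes from 1.0 to 1.2, the cursor sits at widget point (100, 100), and position is (50, 50) before the zoom.

# Made-up numbers, tracing the wheelEvent arithmetic above.
oldscale, newscale = 1.0, 1.2
position = (50.0, 50.0)
dx, dy = 100.0, 100.0                             # screenpoint

oldpoint = (dx + position[0], dy + position[1])   # (150, 150) on the scaled image
newpoint = (oldpoint[0] * (newscale / oldscale),
            oldpoint[1] * (newscale / oldscale))  # (180, 180) after rescaling
position = (newpoint[0] - dx, newpoint[1] - dy)   # (80, 80)

# The scaled-image point now drawn at widget (100, 100) is position + (dx, dy) = (180, 180),
# which maps back to the same underlying image pixel as before: 180 / 1.2 == 150 / 1.0.
assert abs((position[0] + dx) / newscale - oldpoint[0] / oldscale) < 1e-9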

The problem was indeed a trivial confusion between image and display coordinates.
