How does Matrix.postScale(sx, sy, px, py) work?
First, read Taig's question. Taig said:
When calling Matrix.postScale( sx, sy, px, py ); the matrix gets scaled and also translated (depending on the given point x, y). That predestines this method to be used for zooming into images because I can easily focus one specific point. The android doc describes the method like this:
Postconcats the matrix with the specified scale. M' = S(sx, sy, px, py) * M
At a first glance this seems ridiculous because M is supposed to be a 3x3-Matrix. Digging around I've found out that android uses a 4x4-Matrix for its computations (while only providing 3x3 on its API). Since this code is written in C I'm having a hard time trying to understand what is actually happening.
I saw the visual transform at Wolfram. My question is the same as Taig's:
What I actually want to know: How can I apply this kind of scaling (with a focused point) to the 3x3 Matrix that I can access within my Java-code?
Who can give me an example and a 2D-scaling formula with 4 parameters (sx, sy, px, py) that a 10-year-old kid would understand?
Look more closely at the Matrix methods. You will see getValues() and setValues(). The docs say they work with a float array of 9 values. There are also a bunch of constants: MSCALE_X, MSCALE_Y, MTRANS_X, MTRANS_Y, etc. Those constants are indices into the float[9] array.

Since we are only working in 2 dimensions, the matrix would actually be a 2x2 matrix. But because this matrix supports affine transforms, the matrix is extended to become a 3x3 matrix. 3x3 = 9, which corresponds to the float[9] array. That is, essentially, your 3x3 matrix.
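To make the float[9] layout concrete, here is a plain-Java sketch. The index constants mirror the names of Android's Matrix constants, but this is an illustration, not the Android class itself:

```java
// Sketch of the float[9] layout used by android.graphics.Matrix.
// The constants below mirror Matrix.MSCALE_X etc. (row-major 3x3).
class MatrixLayout {
    static final int MSCALE_X = 0, MSKEW_X = 1, MTRANS_X = 2;
    static final int MSKEW_Y = 3, MSCALE_Y = 4, MTRANS_Y = 5;
    static final int MPERSP_0 = 6, MPERSP_1 = 7, MPERSP_2 = 8;

    // An identity matrix: scale factors are 1, everything else 0,
    // except the bottom-right perspective term, which is 1.
    static float[] identity() {
        float[] m = new float[9];
        m[MSCALE_X] = 1;
        m[MSCALE_Y] = 1;
        m[MPERSP_2] = 1;
        return m;
    }
}
```

Reading the array at those indices is exactly what getValues() hands you back.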
The actual guts of Matrix are written in C++ and accessed through JNI because the operations have to be fast, fast, fast. They can even use a special fixed-point number format ("16.16") that is optimized for calculation speed.

I don't know where you are getting the information about a 4x4 array. Here's a code snippet from the underlying C++ (Skia's SkMatrix):
SkScalar fMat[9];
mutable uint32_t fTypeMask;

void setScaleTranslate(SkScalar sx, SkScalar sy, SkScalar tx, SkScalar ty) {
    fMat[kMScaleX] = sx;
    fMat[kMSkewX]  = 0;
    fMat[kMTransX] = tx;

    fMat[kMSkewY]  = 0;
    fMat[kMScaleY] = sy;
    fMat[kMTransY] = ty;

    fMat[kMPersp0] = 0;
    fMat[kMPersp1] = 0;
    fMat[kMPersp2] = 1;

    unsigned mask = 0;
    if (sx != 1 || sy != 1) {
        mask |= kScale_Mask;
    }
    if (tx || ty) {
        mask |= kTranslate_Mask;
    }
    this->setTypeMask(mask | kRectStaysRect_Mask);
}
It's a 3x3 matrix for an affine transform.
When you call matrix.postScale(), you are modifying scaleX, scaleY, transX, and transY. (The pre...() and post...() methods preserve any transform that was in your matrix to start with.) The Matrix applies the new transform like this:

X' = X * scaleX + transX
Y' = Y * scaleY + transY
That's the simplified version of the entire matrix multiplication. If I have a figure with point (2,2) and I scale it 2x, the new point will be (4,4). To move along the X or Y axis, I just add a constant.
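That simplified scale-and-translate step can be written out as a plain-Java sketch (this is the math only, not Android's Matrix.mapPoints()):

```java
// Applies the simplified transform: X' = X*scaleX + transX,
// Y' = Y*scaleY + transY. Returns the mapped point as {x', y'}.
class SimpleTransform {
    static float[] map(float x, float y,
                       float scaleX, float scaleY,
                       float transX, float transY) {
        return new float[] { x * scaleX + transX, y * scaleY + transY };
    }
}
```

Scaling the point (2,2) by 2 gives (4,4), and adding a constant slides it along the axes, exactly as described above.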
Because Matrix.postScale() actually takes a focus point, the method internally adjusts transX and transY as though you are translating in, scaling, then translating back out. This makes the scaling appear as though the expansion/shrinking is centered around the point (px, py).

transX = (1 - scaleX) * px
transY = (1 - scaleY) * py
So for the focus point, I first translate the figure by (-px, -py) so the focus point sits at the origin, then do the scaling, then translate back by (+px, +py). Expanding scaleX * (x - px) + px gives scaleX * x + (1 - scaleX) * px, which is exactly the transX above. The (1 - scaleX) factor appears because the translation back out has to account for the focus point itself having been scaled: instead of subtracting px and py, the scale step has already turned them into scaleX * px and scaleY * py.
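Here is the focus-point adjustment as a plain-Java sketch. The helper names are made up for illustration; the real API is simply Matrix.postScale(sx, sy, px, py):

```java
// Collapses translate(-px,-py) -> scale(sx,sy) -> translate(+px,+py)
// into a single {scaleX, scaleY, transX, transY} transform.
class FocusScale {
    static float[] scaleAbout(float sx, float sy, float px, float py) {
        return new float[] { sx, sy, (1 - sx) * px, (1 - sy) * py };
    }

    // Applies the transform t = {scaleX, scaleY, transX, transY} to (x, y).
    static float[] map(float[] t, float x, float y) {
        return new float[] { x * t[0] + t[2], y * t[1] + t[3] };
    }
}
```

The key property: the focus point itself stays put, while every other point moves away from (or toward) it. Scaling by 2 about (1,1) leaves (1,1) fixed and sends (2,2) to (3,3).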
Skew or shear is like scaling but with opposing axes:

X' = Y * skewX
Y' = X * skewY

Since you're scaling and translating without warping, skewX and skewY are set to zero. They're still used in the matrix multiplication; they just don't affect the final outcome.
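Putting the skew terms back in gives the full 2D affine row. A plain-Java sketch, writing out both rows of the multiplication:

```java
// The full 2D affine mapping, including the skew terms that
// postScale() leaves at zero:
//   X' = X*scaleX + Y*skewX + transX
//   Y' = X*skewY + Y*scaleY + transY
class AffineRow {
    static float[] map(float x, float y,
                       float scaleX, float skewX, float transX,
                       float skewY, float scaleY, float transY) {
        return new float[] {
            x * scaleX + y * skewX + transX,
            x * skewY + y * scaleY + transY
        };
    }
}
```

With skewX = skewY = 0 this collapses to the simplified scale-and-translate form above; a nonzero skewX shears X by a fraction of Y.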
Rotation is done by adding in a little trig:

theta  = angle of rotation
scaleX = cos(theta)
skewX  = -sin(theta)
skewY  = sin(theta)
scaleY = cos(theta)
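Filling the scale/skew slots with those sin/cos values really does rotate a point. A plain-Java sketch under that convention:

```java
// Builds the rotation entries {scaleX, skewX, skewY, scaleY} for a
// rotation by theta radians, then applies them to a point:
//   X' = X*cos(theta) - Y*sin(theta)
//   Y' = X*sin(theta) + Y*cos(theta)
class RotationMatrix {
    static float[] rotation(double theta) {
        float c = (float) Math.cos(theta);
        float s = (float) Math.sin(theta);
        return new float[] { c, -s, s, c };
    }

    static float[] map(float[] r, float x, float y) {
        return new float[] { x * r[0] + y * r[1], x * r[2] + y * r[3] };
    }
}
```

Rotating (1, 0) by 90 degrees lands (up to floating-point error) on (0, 1).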
Then there is android.graphics.Camera (as opposed to android.hardware.Camera), which can take a 2D plane and rotate/translate it in 3D space. This is where MPERSP_0, MPERSP_1, and MPERSP_2 come into play. I'm not doing those equations; I'm a programmer, not a mathematician.
But I don't need to be a mathematician. I don't even need to know how Matrix does its calculations. I have been working on an ImageView subclass that supports pinch/zoom, so I use a ScaleGestureDetector to tell me when the user is zooming. It has the methods getScaleFactor(), getFocusX() and getFocusY(). I plug those values into matrix.postScale(), and with my ImageView's scale type set to MATRIX, I call ImageView.setImageMatrix() with my scaled matrix. Voilà, the image zooms exactly the way the user expects to see it based on their gestures.
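The gesture-driven flow can be simulated without any Android classes: each gesture event post-multiplies a scale-about-focus onto the running matrix, the way matrix.postScale(factor, factor, focusX, focusY) does. This is a sketch of the 3x3 math only; the class and its methods are made up for illustration:

```java
// Simulates repeated postScale(s, s, px, py) calls on a 3x3 matrix.
// Post-concatenation means M' = S(s, s, px, py) * M, so earlier
// transforms in the matrix are preserved (matching the doc's formula).
class PinchZoomSim {
    // Row-major 3x3, starts as the identity.
    float[] m = { 1, 0, 0, 0, 1, 0, 0, 0, 1 };

    void postScale(float s, float px, float py) {
        float tx = (1 - s) * px, ty = (1 - s) * py;
        // Row 0 of M' = s*row0(M) + tx*row2(M); row 1 likewise with ty.
        // Row 2 (the perspective row) is unchanged by an affine scale.
        for (int col = 0; col < 3; col++) {
            m[col]     = s * m[col]     + tx * m[6 + col];
            m[3 + col] = s * m[3 + col] + ty * m[6 + col];
        }
    }

    // Maps a point through the accumulated matrix.
    float[] map(float x, float y) {
        return new float[] { m[0] * x + m[1] * y + m[2],
                             m[3] * x + m[4] * y + m[5] };
    }
}
```

Each pinch event keeps its own focus point fixed while everything already in the matrix is carried along, which is why successive gestures compose the way the user expects.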
So I don't understand all the angst about grokking how Matrix
works under the hood. Still, I hope something I wrote here gives you the answers you are looking for.