Android image transformation with matrix, translate touch coordinates back

I'm building a "navigation type" app for Android.

For the navigation part I'm building an Activity where the user can move and zoom the map (which is a bitmap) using touch events; the map also rotates around the center of the screen using the compass.

I'm using a Matrix to scale, translate and rotate the image, and then I draw it to the canvas.

Here is the code, called when the view loads, to center the image on the screen:

    image = new Matrix();
    image.setScale(zoom, zoom);

    image_center = new PointF(bmp.getWidth() / 2, bmp.getHeight() / 2);

    float centerScaledWidth = image_center.x * zoom;
    float centerScaledHeigth = image_center.y * zoom;

    // Shift the scaled image so its center lands on the screen center.
    image.postTranslate(
            screen_center.x -  centerScaledWidth, 
            screen_center.y - centerScaledHeigth);

The rotation of the image is done using the postRotate method.
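As a sanity check on the centering code above, the bitmap center should land exactly on the screen center after the scale and post-translate. Here is a minimal sketch of that, using desktop Java's AffineTransform as a stand-in for android.graphics.Matrix (Android's post* calls correspond to AWT's preConcatenate) and hypothetical bitmap/screen sizes:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class CenterCheck {
    public static void main(String[] args) {
        // Hypothetical values: a 1000x600 bitmap on an 800x480 screen.
        double zoom = 0.5;
        double bmpW = 1000, bmpH = 600;
        double screenCx = 400, screenCy = 240;

        double centerScaledWidth  = (bmpW / 2) * zoom;
        double centerScaledHeigth = (bmpH / 2) * zoom;

        // Mirror of the Android code: setScale, then postTranslate
        // (postConcat on Android == preConcatenate in AWT).
        AffineTransform image = AffineTransform.getScaleInstance(zoom, zoom);
        image.preConcatenate(AffineTransform.getTranslateInstance(
                screenCx - centerScaledWidth,
                screenCy - centerScaledHeigth));

        // The bitmap center should map exactly onto the screen center.
        Point2D mapped = image.transform(new Point2D.Double(bmpW / 2, bmpH / 2), null);
        System.out.println(mapped.getX() + " " + mapped.getY()); // prints 400.0 240.0
    }
}
```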

Then in the onDraw() method I only call

  canvas.drawBitmap(bmp, image, drawPaint);

The problem is that when the user touches the screen, I want to get the touched point on the image, but apparently I can't get the correct position. I tried inverting the image matrix and mapping the touched points through it, but it isn't working.

Does somebody know how to translate the point coordinates?
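For reference, the usual recipe is to invert the complete current matrix (including any postRotate) and map the touch point through that inverse; on Android that would be image.invert(inverse) followed by inverse.mapPoints(point). A minimal sketch of the same mechanics, using java.awt.geom.AffineTransform as a stand-in for android.graphics.Matrix and made-up transform values:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

public class TouchToImage {
    public static void main(String[] args) throws NoninvertibleTransformException {
        // Hypothetical 'image' matrix: scale by 2, rotate 90 degrees about
        // the origin, then translate (Android's post* == AWT preConcatenate).
        AffineTransform image = AffineTransform.getScaleInstance(2, 2);
        image.preConcatenate(AffineTransform.getRotateInstance(Math.toRadians(90)));
        image.preConcatenate(AffineTransform.getTranslateInstance(300, 100));

        // Map a bitmap point forward to screen coordinates...
        Point2D onScreen = image.transform(new Point2D.Double(10, 20), null);

        // ...then use the inverse of the FULL matrix (rotation included)
        // to map the "touched" screen point back to bitmap coordinates.
        AffineTransform inverse = image.createInverse();
        Point2D onImage = inverse.transform(onScreen, null);

        // Recovers roughly (10.0, 20.0), up to floating-point error.
        System.out.println(onImage.getX() + " " + onImage.getY());
    }
}
```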

EDIT

I'm using this code for the translation. dx and dy are translation values obtained from the onTouch listener. *new_center* is an array of float values in the form {x0, y0, x1, y1, ...}

  Matrix translated = new Matrix();
  Matrix inverted = new Matrix();

  translated.set(image);
  translated.postTranslate(dx, dy);

  translated.invert(inverted);
  inverted.mapPoints(new_center);
  translated.mapPoints(new_center);

  Log.i("new_center", new_center[0]+" "+new_center[1]);

I tried it using *new_center = {0, 0}*:

Applying only the translated matrix, I get, as expected, the distance between the (0,0) point of the bmp and the (0,0) point of the screen, but it seems not to take the rotation into account.

Applying the inverted matrix to the points, I get the results below while moving the image in every possible way:

  12-26 13:26:08.481: I/new_center(11537): 1.9073486E-6 -1.4901161E-7
  12-26 13:26:08.581: I/new_center(11537): 0.0 -3.874302E-7
  12-26 13:26:08.631: I/new_center(11537): 1.9073486E-6 1.2516975E-6
  12-26 13:26:08.781: I/new_center(11537): -1.9073486E-6 -5.364418E-7
  12-26 13:26:08.951: I/new_center(11537): 0.0 2.682209E-7
  12-26 13:26:09.093: I/new_center(11537): 0.0 7.003546E-7

Instead, I was expecting the coordinates translated onto the image.

Is my line of thought correct?
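A side note on those log values: the EDIT code maps *new_center* through inverted and then through translated, and since applying a matrix after its own inverse is the identity, {0, 0} comes straight back as {0, 0} up to floating-point noise, which matches the 1e-6/1e-7 range in the logs. A sketch of this, with java.awt.geom.AffineTransform standing in for android.graphics.Matrix and arbitrary example values:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;

public class WhyNearZero {
    public static void main(String[] args) throws NoninvertibleTransformException {
        // Any invertible matrix will do; these values are hypothetical.
        AffineTransform translated = AffineTransform.getScaleInstance(0.5, 0.5);
        translated.preConcatenate(AffineTransform.getRotateInstance(Math.toRadians(30)));
        translated.preConcatenate(AffineTransform.getTranslateInstance(120, -45));

        AffineTransform inverted = translated.createInverse();

        // Same sequence as the EDIT code: inverse first, then the matrix.
        double[] newCenter = {0, 0};
        inverted.transform(newCenter, 0, newCenter, 0, 1);
        translated.transform(newCenter, 0, newCenter, 0, 1);

        // M applied after M^-1 is the identity, so (0, 0) comes back
        // as (0, 0) up to floating-point noise.
        System.out.println(newCenter[0] + " " + newCenter[1]);
    }
}
```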

Solution

OK, I got it.

First I separated the rotation from the translation and zooming of the image.

Because I created a custom ImageView, this was simple: I apply the rotation to the canvas of the ImageView, and the other transformations to the matrix of the image.

I keep track of the canvas' matrix through a global Matrix variable.

Some code:

To set the correct movement for the corresponding onTouch event, first I "rotate back" the points passed from onTouch (the start and stop points) using the inverse of the canvas matrix.

Then I calculate the differences in x and y, and apply them to the image matrix.

  float[] movement = {start.x, start.y, stop.x, stop.y};

  // 'canvas' here is the global Matrix tracking the canvas rotation
  Matrix c_t = new Matrix();
  canvas.invert(c_t);
  c_t.mapPoints(movement);

  float dx = movement[2] - movement[0];
  float dy = movement[3] - movement[1];

  image.postTranslate(dx, dy);

If you also want to check that the image movement doesn't exceed its size, put this code before the image.postTranslate(dx, dy):

  float[] new_center = {screen_center.x, screen_center.y};

  Matrix copy = new Matrix();
  copy.set(image);
  copy.postTranslate(dx, dy);

  Matrix translated = new Matrix();
  copy.invert(translated);
  translated.mapPoints(new_center);

  if ((new_center[0] > 0) && (new_center[0] < bmp.getWidth()) && 
    (new_center[1] > 0) && (new_center[1] < bmp.getHeight())) {

        // you can remove the image.postTranslate and copy the "copy" matrix instead
        image.set(copy);
  ...

It's important to note that:

A) The rotation center of the image is the center of the screen, so its coordinates do not change during the canvas' rotation.

B) You can use the coordinates of the center of the screen to get the rotation center of the image.

With this method you can also convert every touch event to image coordinates.
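The full screen-to-bitmap conversion described above amounts to two inverse mappings in sequence: undo the canvas rotation, then undo the image matrix. A sketch of that pipeline, again using AffineTransform as a stand-in for android.graphics.Matrix, with hypothetical screen and zoom values:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

public class TouchEventToImageCoords {
    // Screen touch -> bitmap coordinates, given the canvas rotation
    // matrix and the image (scale + translate) matrix.
    static Point2D toImageCoords(Point2D touch, AffineTransform canvasRot,
                                 AffineTransform image)
            throws NoninvertibleTransformException {
        Point2D unrotated = canvasRot.createInverse().transform(touch, null);
        return image.createInverse().transform(unrotated, null);
    }

    public static void main(String[] args) throws NoninvertibleTransformException {
        // Hypothetical setup: 45-degree rotation about the screen center
        // (400, 240); image scaled by 0.5 and translated by (150, 90).
        AffineTransform canvasRot =
                AffineTransform.getRotateInstance(Math.toRadians(45), 400, 240);
        AffineTransform image = AffineTransform.getScaleInstance(0.5, 0.5);
        image.preConcatenate(AffineTransform.getTranslateInstance(150, 90));

        // A touch at the screen center is unaffected by the rotation, so it
        // should map straight to the bitmap center under the image matrix:
        // ((400-150)/0.5, (240-90)/0.5) = approximately (500, 300).
        Point2D p = toImageCoords(new Point2D.Double(400, 240), canvasRot, image);
        System.out.println(p.getX() + " " + p.getY());
    }
}
```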
