Scaling AR pictures based on the distance from the camera


Problem description

I’m developing an augmented reality iPhone app.

What it should basically do is display pictures assigned to geographical locations when you look at them by the camera. Each such picture may be understood as a billboard which has its geographical position and a heading (understood as an angle between its plane and the north direction axis).

The goal is to make these billboards display more or less like they were physical objects. They should be larger if you are close to them and smaller when farther. They should as well appear in a proper perspective when you don’t stand directly in front of them.

I think I have achieved that goal more or less. By measuring the initial heading from the iPhone to a picture I can decide about the rotation angle of the pictures as viewed by the camera (to see them in a proper perspective).

However, if it comes to scaling them based on the distance from the phone, I think I screwed my approach. I made an assumption that the maximum view distanse is, let’s say, 200 m. Then billboards being 100 m from the phone are displayed in 50% of their original size. That’s it. A linear scaling based on the maximum distance.

What this approach misses is the size of the billboards (understood as physical objects). The way they appear on the screen depends on their size in pixels, which means the display resolution is what determines how they are perceived. So I assume that if you took two phones with the same screen size but different resolutions, the same picture would appear at different sizes on the two of them. Am I right?

Then finally, my question is how to approach scaling pictures to make them look good on the AR view?

I think I should take some camera parameters into consideration. When a 10x10 cm object is just in front of the camera it may cover the whole screen. But when you put it a few metres farther, it becomes a minor detail. Then how to approach scaling? If I decide to assign physical dimensions to my virtual billboards, then how to scale them based on the distance from the camera?

Should I assign physical dimensions to each picture (regardless of its size in pixels) and display it based on those dimensions and some camera-related scaling factor?

Could you please help me on that? Any clues will be helpful. Thank you!

Answer

I think I managed to solve my problem. Let me explain how I did it, in case it might be of use to others. If you find this approach wrong, I will be grateful for your feedback or any further clues.

I decided to assign physical dimensions in metres to my virtual billboards. This discussion helped me find out the parameters of the iPhone 4 camera: the focal length and the dimensions of the CCD sensor. What's more, these values also helped me calculate a proper FOV for my AR app (see Calculating a camera's angle of view).

This website helped me calculate the size in millimeters of a physical object image produced on a CCD sensor. So if my billboards have width and height in metres and their distance from the camera is known as well as the focal length of the camera, I can calculate their size on the sensor.

(Focal Length * Object Dimension) / Lens-to-object distance = Image Size (on the sensor)



double CalculatePhysicalObjectImageDimensionOnCCD(double cameraFocalLength_mm, double physicalObjectDimension_m, double distanceFromPhysicalObject_m)
{
    // Convert metres to millimetres so every term shares the same unit.
    double physicalObjectDimension_mm = physicalObjectDimension_m * 1000;
    double distanceFromPhysicalObject_mm = distanceFromPhysicalObject_m * 1000;

    // (focal length * object dimension) / lens-to-object distance = image size on the sensor
    return (cameraFocalLength_mm * physicalObjectDimension_mm) / distanceFromPhysicalObject_mm;
}



I have little knowledge on that matter so I’m not sure if the approach I took then is OK, but I just decided to calculate how much larger the iPhone screen is compared to the dimensions of the CCD sensor. So by a simple mathematical operation I get a sensor-to-screen size ratio. Because the width-to-height ratio of the sensor and the screen seem to be different, I calculated the ratio in a kind of cranky way:

double GetCCDToScreenSizeRatio(double sensorWidth, double sensorHeight, double screenWidth, double screenHeight)
{
    // Compare the geometric means of the two areas, since the sensor and
    // the screen have different aspect ratios.
    return sqrt(screenWidth * screenHeight) / sqrt(sensorWidth * sensorHeight);
}

Then the ratio I get can be treated as a multiplier. First I calculate a dimension of my virtual billboard on the sensor and then multiply it by the ratio. This way I get the actual size of the billboard in pixels. That’s it. So when I call the function below just by providing the width of my billboard and the distance from it, it returns the width in pixels of the billboard as viewed on the screen. Same for the height of the billboard to get both dimensions.

const double CCD_DIM_LONGER_IPHONE4 = 4.592; //mm
const double CCD_DIM_SHORTER_IPHONE4 = 3.450; //mm
const double FOCAL_LENGTH_IPHONE4 = 4.28; //mm

double CalculatePhysicalObjectImageDimensionOnScreen_iPhone4(double physicalObjectDimension_m, double distanceFromPhysicalObject_m)
{
    // Screen dimensions in points; the sensor-to-screen ratio maps
    // millimetres on the CCD to these units.
    double screenWidth = [UIScreen mainScreen].bounds.size.width;
    double screenHeight = [UIScreen mainScreen].bounds.size.height;

    return CalculatePhysicalObjectImageDimensionOnScreen(FOCAL_LENGTH_IPHONE4, physicalObjectDimension_m, distanceFromPhysicalObject_m, CCD_DIM_LONGER_IPHONE4, CCD_DIM_SHORTER_IPHONE4, screenWidth, screenHeight);
}

double CalculatePhysicalObjectImageDimensionOnScreen(double cameraFocalLength_mm, double physicalObjectDimension_m, double distanceFromPhysicalObject_m, double ccdSensorWidth, double ccdSensorHeight, double screenWidth, double screenHeight)
{
    double ccdToScreenSizeRatio = GetCCDToScreenSizeRatio(ccdSensorWidth, ccdSensorHeight, screenWidth, screenHeight);
    double dimensionOnCcd = CalculatePhysicalObjectImageDimensionOnCCD(cameraFocalLength_mm, physicalObjectDimension_m, distanceFromPhysicalObject_m);

    // Scale the on-sensor size up to screen units.
    return dimensionOnCcd * ccdToScreenSizeRatio;
}

It seems that it works perfectly compared to my previous, naive approach of linear scaling. I also noticed, by the way, that it is really important to know the FOV of your camera when registering virtual objects on an AR view. Here's how to calculate the FOV based on the CCD sensor dimensions and the focal length.

It’s so difficult to find these values anywhere! I wonder why they are not accessible programmatically (at least my research showed me that they are not). It seems that it is necessary to prepare hard-coded values and then check the model of the device the app is running on to decide which of the values to choose when doing all the calculations above :-/.
